ID | Presenters | Project | Okay to share recording? | Affiliation | Title | Abstract | Presentation Duration | Discussion Duration | Time & Date Requests
---|---|---|---|---|---|---|---|---|---
102 | Enrico Masala | JEG-Hybrid | Yes | Politecnico di Torino University | IGVQM Current Status & Results | Current results of the IGVQM project, which aims to produce a document containing guidelines for VQM users. Motivation: a variety of metrics, tools and procedures have been developed, but unfortunately it is not always clear how to use them to correctly measure improvements, even for some researchers within the video quality assessment research community. The IGVQM document aims to fill this gap by suggesting how best to use the metrics in common situations and settings, specifically focusing on impairments induced by video compression and scaling, and on how to avoid common and subtle mistakes in these contexts. Every recommendation is meant to be supported by statistical analysis of experiments relying on publicly available subjectively annotated datasets and software, so that every result contained in the document can be fully reproduced. The aspects the document aims to consider include, but are not limited to: which reference implementations are recommended for metric computation, implementation complexity of the metrics (e.g. in terms of running time), temporal aggregation methods for image quality metrics, standard logistic mappings of objective metrics to a normalized linear scale (see Sketch A after the table), and the effect of common mistakes on quality evaluation. | 15 min | 10 min |
103 | Lohic Fotio Tiotsop | JEG-Hybrid | Yes | Politecnico di Torino University | Software for quality recovery in subjective quality assessment experiments: comparison of results | The talk will present software implementing seven methods, from seven works in the literature, that aim at quality recovery in subjective quality assessment experiments. A comparison of the recovered results will also be presented. | 15 min | 5 min | 9am (remote from EU, early morning is preferable)
104 | Lohic Fotio Tiotsop | JEG-Hybrid | Yes | Politecnico di Torino University | A non-parametric approach to subjective media quality recovery in the presence of spammer annotators | Spammer annotators in Quality of Experience (QoE) assessments provide unreliable ratings, often scoring randomly or assigning extreme ratings, which introduces noise and compromises data reliability. Modern methods to mitigate the effects of this noise rely on parametric approaches like maximum likelihood estimation or Bayesian techniques. These methods are sensitive to model assumptions and might therefore suffer from a lack of robustness. This paper proposes a non-parametric approach to measure annotator reliability and introduces the Non-Parametric subjective Quality Recovery (NPQR) algorithm, which is shown to compare favorably to state-of-the-art methods in terms of robustness against spammers. (Sketch B after the table gives a simplified illustration of non-parametric reliability scoring.) | 20 min | 10 min | 9am (remote from EU, early morning is preferable)
105 | 5G-KPI team | 5G-KPI | Yes | VQEG | VQEG white paper on QoE management in telecommunication networks | The 5G-KPI working group has completed a first draft of a white paper and will share insights and recommendations on actionable controls and performance metrics that Content Application Providers (CAPs) and Network Service Providers (NSPs) can use to infer, measure and manage QoE, including jointly through appropriate exposure/exchange interfaces that enable a collaborative approach to optimizing QoE. To ensure widespread market adoption, we will evaluate opportunities to propose these recommendations to relevant standardization bodies. | 60 min | included | At least one hour in total. Maybe more?
106 | Dawid Juszka | MEHF | Yes | AGH University of Krakow | Impact of valence and arousal of video content on subjective QoE assessment scores | | 10 min | 5 min | possibly not on 5th of May
107 | Maria Martini | JEG-Hybrid | Yes | Kingston University London | SSIM-based quality assessment of transcoded video | Video transcoding is necessary in adaptive video streaming to create multiple representations of a video for content adaptation. The quality of the transcoded video is often computed via objective quality metrics with reference to the already compressed source image/video; however, this does not provide information on the actual quality of the transcoded video with respect to the original uncompressed reference. Via statistical considerations and assumptions, an SSIM-based video transcoding quality metric is derived, linking the global quality of the transcoded video (vs. the original uncompressed video) to the quality of the ingested video and the quality of the video output by the transcoder versus the ingested video. (Sketch C after the table illustrates the three SSIM quantities involved.) | 15 min | 5 min |
108 | Syed Uddin | MEHF | Yes | AGH University of Krakow | Subjective evaluation of HAS and low-latency algorithms for enhanced QoE | The demand for multimedia traffic over the Internet is growing exponentially. HTTP adaptive streaming (HAS) is the leading video delivery system for delivering high-quality video to the end user. Adaptive bitrate (ABR) algorithms running on the HTTP client select the highest feasible video quality by adjusting the quality to fluctuating network conditions. Recently, low-latency ABR algorithms have been introduced to reduce the end-to-end latency commonly experienced in HAS. In this work, we present an evaluation of low-latency algorithms and compare their performance with traditional DASH-based ABR algorithms across multiple QoE metrics, various network conditions, and diverse content types. Additionally, we conduct an extensive subjective test to evaluate the impact of video quality variations on QoE. | 10 min | 5 min |
109 | Pablo Pérez, Marta Orduna, Kamil Koniuch | 5G-KPI | Yes | Nokia, AGH University of Krakow | A QoE model for 5G networks | Design guidelines and proposal of a simple but practical QoE model for communication networks, with a focus on 5G/6G compatibility. | 20 min | 5 min |
110 | Tomasz Konaszyński | MEHF | Yes | AGH University of Krakow | Human and contextual bias in QoE: should QoE researchers restrict access for testers in bad psychophysical condition? | Summary of an experiment on the impact of testers' psychophysical condition, declared at the beginning of the research process (i.e., the level of tiredness and the mood with which the tester approaches the experiment), on subjective assessment scores. | 15 min | 5 min | 9am (early morning strongly preferable due to EU location), any day
111 | Avrajyoti Dutta | MEHF | Yes | AGH University of Krakow | Human factors influencing crowdsourced subjective video quality assessment | This research investigates the human factors affecting subjective video quality evaluation via a crowdsourced experiment employing the Absolute Category Rating (ACR) method. Analyzing over 7,900 ratings from 47 participants, we found that cognitive burden and contextual biases impact perceived video quality. Our results highlight the need for integrating perceptual science with computational models and advocate for the use of subjective assessment methods in multimedia systems development to enhance Quality of Experience (QoE) understanding. | 10 min | 5 min | 9am (early morning strongly preferable due to EU location), any day
112 | Gareth Rendle, Felix Immohr | IMG | Yes | Bauhaus-Universität Weimar, TU Ilmenau | Influence of Audiovisual Realism on Communication Behaviour in Group-to-Group Telepresence | Group-to-group telepresence systems immerse geographically separated groups in a shared interaction space where remote users are represented as avatars. Notably, such systems allow users to interact with collocated and remote interlocutors simultaneously. In this context, where virtual user representations can be directly compared with real users, we investigate how visual realism (avatar type) and aural realism (presence of spatial audio) affect communication. Furthermore, we examine how communication differs between collocated and remote pairs of interlocutors. In our user study, groups of four participants perform a collaborative conversation task under the aforementioned visual and aural realism conditions. Our results indicate that avatar realism has positive effects on subjective ratings of perceived message understanding and group cohesion, and yields behavioural differences that indicate more interactivity and engagement. Few significant effects of aural realism were observed. Comparisons between collocated and remote communication found that collocated communication was perceived as more effective, but that more visual attention was paid to both remote participants than the collocated user. | 12 min | 5 min |
113 | Anton Lammert, Gareth Rendle, Felix Immohr | IMG | Yes | Bauhaus-Universität Weimar, TU Ilmenau | Immersive Study Analyzer: Collaborative Immersive Analysis of Recorded Social VR Studies | Virtual Reality (VR) has become an important tool for conducting behavioral studies in realistic, reproducible environments. We present ISA, an Immersive Study Analyzer system designed for the comprehensive analysis of social VR studies. For in-depth analysis of participant behavior, ISA records all user actions, speech, and the contextual environment of social VR studies. A key feature is the ability to review and analyze such immersive recordings collaboratively in VR, through support of behavioral coding and user-defined analysis queries for efficient identification of complex behavior. Respatialization of the recorded audio streams enables analysts to follow study participants' conversations in a natural and intuitive way. To support phases of close and loosely coupled collaboration, ISA allows joint and individual temporal navigation, and provides tools to facilitate collaboration among users at different temporal positions. An expert review confirms that ISA effectively supports collaborative immersive analysis, providing a novel and effective tool for nuanced understanding of user behavior in social VR studies. | 12 min | 5 min |
114 | Ryan Lei, Qi Cai | SOGAI | Yes | Meta Platforms Inc | Learning from Subjective Evaluation of Super Resolution in Production Use Cases at Scale | With high interest from many product teams trying to leverage Super Resolution (SR) technology to improve the quality of created videos, we conducted extensive benchmarking tests and subjective evaluation with external crowdsource vendors to understand and address the following questions: (1) Can we leverage subjective evaluation to benchmark different super resolution algorithms? (2) How do objective metrics correlate with subjective quality evaluation of SR? (3) What are the risks of applying super resolution, and how can we mitigate them in production? From our study, we obtained valuable learnings. First, human subjective evaluation can identify statistically significant quality improvement from super resolution (Sketch D after the table shows one way to test for this). Second, our methodology identified a promising no-reference objective metric that correlates well with subjective ratings. Third, we were able to identify some risks from super resolution, and some metrics can be used for detecting and mitigating such risks. | 20 min | 5 min |
115 | Mohsen Jenadeleh | SAM | Yes | University of Konstanz | Subjective Visual Quality Assessment for High-Fidelity Learning-Based Image Compression | Learning-based image compression methods, such as JPEG AI, offer improved rate-distortion and perceptual quality by leveraging deep neural networks. This study presents a comprehensive subjective visual quality assessment of JPEG AI-compressed images using the JPEG AIC-3 methodology, which quantifies differences in Just Noticeable Difference (JND) units. We created a dataset of 50 compressed images from five diverse sources and collected 96,200 triplet responses from 459 participants via crowdsourcing. JND-based quality scales were reconstructed using a unified model combining boosted and plain triplet comparisons. We also evaluated how well objective metrics align with human perception, finding that while CVVDP performed best overall, most metrics—including CVVDP—overestimated quality in the high-fidelity range. Our findings highlight the need for rigorous subjective testing when benchmarking modern codecs. We also used the Meng–Rosenthal–Rubin statistical test to assess significant differences between quality metrics. The full dataset is publicly available at https://github.com/jpeg-aic/dataset-JPEG-AI-SDR25. The corresponding publication is accessible at: https://arxiv.org/abs/2504.06301. | 10 min | 5 min |
116 | Mathias Wien | ETG | Yes | RWTH Aachen University | Sequence and rate-point selection for a Call for Evidence on video compression with capability beyond VVC | This contribution reports on recent developments in MPEG AG 5 and JVET in preparation for a CfE on video compression with capability beyond VVC. The effort can be considered a preparation and dry-run experiment for a potential CfP in the same domain, which could be issued after successful completion of the CfE. | 15 min | 5 min | 9am (early session preferable due to timezone), Wed-Fri
117 | Mohsen Jenadeleh, Jon Sneyers | SAM | Yes | University of Konstanz, Cloudinary | Fine-Grained HDR Image Quality Assessment | High dynamic range (HDR) and wide color gamut (WCG) technologies significantly improve color reproduction compared to standard dynamic range (SDR) and standard color gamuts, resulting in more accurate, richer, and more immersive images. However, HDR also increases data demands, challenging bandwidth efficiency and compression techniques. Advances in compression and display technologies require more precise image quality assessment, particularly in the high-fidelity range where perceptual differences are subtle. To address this gap, we introduce AIC-HDR2025, the first such HDR dataset, comprising 100 test images generated from five sources, each compressed using four codecs at five compression levels. It covers the high-fidelity range, from visible distortions to compression levels below the perceptually lossless threshold. A subjective study was conducted using the JPEG AIC-3 test methodology, combining plain and boosted triplet comparisons. In total, 34,560 ratings were collected from 151 participants across four fully controlled labs. The results confirm that AIC-3 enables precise HDR quality estimation, with 95% confidence intervals averaging 0.27 at 1 JND. In addition, several recently proposed objective metrics were evaluated based on their correlation with human raters. | 20 min | 5 min | Prefer Monday
118 | Dietmar Saupe | SAM | Yes | University of Konstanz | JPEG AIC-3: A standard for fine-grained subjective assessment of image quality in the high-fidelity range | For high-quality images and videos, distinguishing between compressed and original content becomes difficult, requiring assessment methodologies beyond conventional approaches such as absolute category ratings. The JPEG AIC project has developed a subjective image quality assessment methodology for high-fidelity images, currently under review at ISO/IEC as an international standard. This presentation explains the proposed standard, which has the following key ingredients. Study participants respond to triplet comparisons between two distorted stimuli in the presence of the source image to assess fidelity. Boosting techniques (artefact amplification, flicker test, zooming) help observers detect compression artifacts more clearly. A rescaling process adjusts boosted quality values back to the original perceptual scale, expressed in just noticeable difference (JND) units. Scale reconstruction is performed by maximum likelihood estimation, jointly for boosted and plain stimuli, with a functional approach yielding complete distortion-rate curves instead of a pointwise reconstruction per image. (Sketch E after the table shows a minimal triplet-based scale reconstruction.) Prior to scale reconstruction, unreliable data are filtered by thresholding a combination of accuracy, consistency and order bias, followed by outlier removal, all automatic and without the need to specify parameters. | 30 min | 10 min | This presentation should precede Mohsen's talks 115 and 117.
119 | Dietmar Saupe | SAM | Yes | University of Konstanz | Robustness and accuracy of MOS with hard and soft outlier detection | There is a need for a reliable and comprehensive approach to the comparative performance analysis of outlier detection methods for subjective assessment of image and video quality. To fill this gap, this work proposes and applies an empirical worst-case analysis as a general solution. Our method involves evolutionary optimization of an adversarial black-box attack on outlier detection algorithms, where the adversary maximizes the distortion of reconstructed scale values with respect to ground truth. We apply our analysis to several hard and soft outlier detection methods for absolute category ratings and show their differing performance in this stress test. In addition, we propose two new outlier detection methods with low complexity and excellent worst-case performance. (Sketch F after the table is a toy version of such a stress test.) | 20 min | 10 min |
120 | Panagiotis Traganitis | SAM | Yes | Michigan State University | Learning from crowdsourced noisy labels | Crowdsourcing has become a powerful tool for generating reliable rankings from noisy human input, enabling tasks such as pairwise comparison aggregation and Likert-scale evaluation. By leveraging the responses of many annotators, crowdsourcing not only combines diverse judgments but also estimates the reliability of each source, making it well suited for ranking in settings with unreliable or even adversarial data. This brief talk presents a unified framework for learning from weak information sources, covering classical and modern methods for aggregating rankings while inferring annotator quality, as well as its application to ranking problems. (Sketch G after the table shows one classical method of this kind.) | 25 min | 5 min | Early session preferable (speaker will be in Greece at this time)
121 | Silvia Casino | IMG | Yes | Nokia XR Lab | Evaluation of Segmentation Algorithms for Embodiment Improvement in an XR Application | As XR applications evolve, there is growing interest in integrating realistic avatars and objects for better user interaction. We propose enhancing segmentation algorithms to merge the user's real body and surrounding objects, improving embodiment and interaction, particularly in a table etiquette scenario. Our study includes both objective and subjective evaluations of these algorithms, with a focus on inclusivity, including participants with intellectual disabilities. The results help guide the development of more accessible XR applications. | 15 min | 5 min | Last session preferable
122 | Ioannis Katsavounidis, Qi Cai, Elias Kokkinis, Shankar Regunathan | SOGAI | Yes | Meta Platforms Inc | Learning from Synergistic Subjective/Objective Evaluation of Autodubbing in Production Use Cases | TBA | 20 min | 5 min |
123 | Pablo Pérez, Marta Orduna, Jesús Gutiérrez | IMG | Yes | Nokia XR Lab & Universidad Politécnica de Madrid | Status of Rec. ITU-T P.IXC | Review of the status of Rec. ITU-T P.IXC, which the IMG group is writing based on the joint test plan developed over the last months. | 60 min | |
124 | Avinab Saha | ETG | Yes | UT Austin | FaceExpressions-70k: A Dataset of Perceived Expression Differences | Facial expressions are key to human communication, conveying emotions and intentions. Given the rising popularity of digital humans and avatars, the ability to accurately represent facial expressions in real time has become an important topic. However, quantifying perceived differences between pairs of expressions is difficult, and no comprehensive subjective datasets are available for testing. This work introduces a new dataset targeting this problem: FaceExpressions-70k. Obtained via crowdsourcing, our dataset contains 70,500 subjective expression comparisons rated by over 1,000 study participants. We demonstrate the applicability of the dataset for training perceptual expression difference models and guiding decisions on acceptable latency and sampling rates for facial expressions when driving a face avatar. | 15 min | 5 min | Prefer May 5-6, early in the day (9-11 am Pacific Time), due to time zone differences
126 | Kamil Koniuch, Norbert Barczyk, Lucjan Janowski, Mateusz Olszewski | IMG | Yes | AGH University of Krakow | VR Games Based on Circumplex Model of Group Tasks for QoE Measurements and Beyond | | 15 min | 5 min |
127 | Kamil Koniuch | SOGAI | Yes | AGH University of Krakow | Cognitive perspective on ACR scale tests | | 20 min | 10 min |
128 | Effrosyni Doutsi | ETG | Yes | Foundation for Research and Technology - Hellas | Beyond Pixels: Novel Evaluation Frameworks for Spike-Based Compression Mechanisms | The brain compresses the world around us not with brute force, but with elegance — through selective spikes, sparse coding, and intelligent loss of detail. As we move toward compression algorithms inspired by these principles, we must also rethink how we measure success. Traditional metrics built for pixels and frames miss the deeper story of information, energy, and perception. In this talk, we explore a future where compression is evaluated through biological lenses: by how well meaning is preserved, how efficiently energy is used, and how closely the patterns of spikes echo natural intelligence. Rather than comparing numbers, we invite a shift in mindset — from data fidelity to information relevance — as we imagine the next generation of neuro-inspired compression systems. | 10 min | 5 min |
129 | Ludo Malfait | SAM | | BT | VPN and remote desktop users on crowdsourcing platforms | | 20 min ??? | |
130 | Xuemei Zhou | IMG | Yes | CWI & TU Delft | Point Cloud Quality Assessment and Visual Saliency | I will introduce a Task-Free eye-tracking dataset and a task-dependent dataset for Dynamic Point Clouds (TF-DPC) aimed at investigating visual attention, and a comparison of the task impact. The datasets are composed of eye gaze and head movements collected from 40/24 participants observing 50/19 scanned dynamic point clouds in a Virtual Reality (VR) environment with 6 degrees of freedom. We compare the visual saliency maps generated from these two different lab settings to explore how high-level tasks influence human visual attention. To measure the similarity between these visual saliency maps, we apply the well-known Pearson correlation coefficient and an adapted version of the Earth Mover's Distance metric, which takes into account both spatial information and the degrees of saliency. (Sketch H after the table illustrates both measures.) Our experimental results provide both qualitative and quantitative insights, revealing significant differences in visual attention due to task influence. | 20 min | 5 min | 9am (remote from Amsterdam, early morning is preferable)
131 | Lumi Xia | QAH | Yes | INSA Rennes | Task-based Medical Image Quality Assessment by Numerical Observer | Unlike natural images, whose quality can be easily evaluated both perceptually, by our intuitive “good-looking” standard, and scientifically, by various image quality assessment (IQA) metrics, medical images lack common standards or appropriate mathematical criteria for their evaluation. Given their role as diagnostic tools, conventional IQA metrics are often insufficient or irrelevant for assessing their quality. Instead, medical image quality assessment focuses on how much diagnostic information is conveyed and how well it is preserved. Task-based model observers are proposed to meet this goal. Moreover, different tasks must be considered — for instance, lesion detection requires much less information than lesion characterization. This thesis introduces task-based model observers for evaluating medical images. The approach is first validated using a simple COVID-19 detection task. The main task focuses on 3D adrenal lesion characterization on multiphase CT scans, aiming to evaluate the impact of radiation dose reduction on image quality. A U-Net-based model is employed to generate lower-dose CT (LDCT) images from normal-dose CT (NDCT) data. Two model observers are developed: an image-processing-based observer for 3D washout calculation, and a deep-learning-based observer for lesion localization. These models are applied to NDCT and simulated LDCT images at varying dose levels, and their performance is used to assess image quality. (Sketch I after the table shows a minimal task-based numerical observer.) | 40 min | 20 min | 11 am (remote from France)
132 | Ajit Ninan | Keynote | | Meta Platforms Inc | Rethinking Visual Quality for Perceptual Displays | | | |
133 | Prof. Patrick Le Callet | SOGAI/IMG | Yes | Nantes Université/SJTU | AIGCOIQA2024: Perceptual Quality Assessment of AI-Generated Omnidirectional Images | | 10 min | 5 min |
134 | Prof. Patrick Le Callet | IMG | Yes | Nantes Université/SJTU | ESIQA: Perceptual Quality Assessment of Vision-Pro-Based Egocentric Spatial Images | | 10 min | 5 min |
135 | Prof. Patrick Le Callet | IMG | Yes | Nantes Université/CSTB | Interactions between vibroacoustic discomfort and visual stimuli: Comparison of real, 3D and 360 environments | | 10 min | 5 min |
136 | Prof. Patrick Le Callet | IMG/QAH | Yes | Nantes Université/IUF | Orientation and mobility test in virtual reality, a tool for quantitative assessment of functional vision: dataset and evaluation in healthy subjects | | 10 min | 5 min |
137 | David Ronca | ETG | Yes | Meta Platforms Inc | VCAT: Video Codec Acid Test | | 15 min | 5 min |
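The code sketches below are referenced from the abstracts above. Each is a minimal, self-contained illustration in Python with hypothetical data, not the presenters' implementation.

Sketch A (for #102): one common way to map an objective metric onto the subjective scale is a monotonic 4-parameter logistic fitted to MOS, after which linear correlation is computed on the mapped scores. The metric values and MOS below are hypothetical.

```python
# Fit a 4-parameter logistic mapping from an objective metric to MOS,
# then report PLCC on the mapped scores and (mapping-invariant) SROCC.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr

def logistic_4p(x, b1, b2, b3, b4):
    """Monotonic 4-parameter logistic, a common choice for metric-to-MOS mapping."""
    return b1 + (b2 - b1) / (1.0 + np.exp(-(x - b3) / b4))

metric = np.array([0.80, 0.85, 0.90, 0.93, 0.96, 0.99])  # e.g. SSIM scores (hypothetical)
mos    = np.array([1.8,  2.4,  3.1,  3.7,  4.2,  4.6])   # matching MOS (hypothetical)

# Reasonable initial guesses help the fit converge.
p0 = [mos.min(), mos.max(), metric.mean(), (metric.max() - metric.min()) / 4]
params, _ = curve_fit(logistic_4p, metric, mos, p0=p0, maxfev=20000)
predicted = logistic_4p(metric, *params)

print("PLCC after mapping:", pearsonr(predicted, mos)[0])
print("SROCC:", spearmanr(metric, mos)[0])
```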
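Sketch B (for #104): this is not the NPQR algorithm itself, only a generic sketch of non-parametric reliability scoring under stated assumptions: score each annotator by rank correlation against the leave-one-out median of the others, then recover quality as a reliability-weighted mean. All data are simulated.

```python
# Illustrative non-parametric annotator reliability (not the paper's NPQR).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
true_q = rng.uniform(1, 5, size=40)                         # hidden stimulus quality
ratings = true_q + rng.normal(0, 0.4, size=(10, 40))        # 10 honest annotators
ratings = np.vstack([ratings, rng.uniform(1, 5, (2, 40))])  # 2 spammers rating at random

reliability = np.empty(ratings.shape[0])
for a in range(ratings.shape[0]):
    others = np.delete(ratings, a, axis=0)
    loo_median = np.median(others, axis=0)     # leave-one-out consensus (rank-based check below)
    rho = spearmanr(ratings[a], loo_median)[0]
    reliability[a] = max(rho, 0.0)             # non-positive correlation -> weight 0

weights = reliability / reliability.sum()
recovered = weights @ ratings                  # reliability-weighted quality estimate
print("reliability per annotator:", np.round(reliability, 2))
```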
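Sketch C (for #107): the three SSIM quantities the abstract links, computed with scikit-image. Gaussian blur stands in for compression purely for illustration; the talk's derived relation between the three values is not reproduced here.

```python
# Compute SSIM of ingest vs. original, transcode vs. ingest, and transcode vs. original.
import numpy as np
from skimage import data, img_as_float
from skimage.filters import gaussian
from skimage.metrics import structural_similarity as ssim

original = img_as_float(data.camera())      # uncompressed reference
ingested = gaussian(original, sigma=1.0)    # stand-in for the first compression
transcoded = gaussian(ingested, sigma=1.0)  # stand-in for the transcode

ssim_ingest = ssim(original, ingested, data_range=1.0)       # quality of the ingested video
ssim_transcode = ssim(ingested, transcoded, data_range=1.0)  # transcoder output vs. ingest
ssim_global = ssim(original, transcoded, data_range=1.0)     # the quantity of real interest
print(ssim_ingest, ssim_transcode, ssim_global)
```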
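Sketch D (for #114): one standard way to check whether an SR variant's MOS improvement is statistically significant is a bootstrap confidence interval on the mean difference. The ratings are synthetic and the choice of test is an assumption, not the presenters' methodology.

```python
# Bootstrap 95% CI on the MOS difference between SR output and baseline.
import numpy as np

rng = np.random.default_rng(42)
mos_baseline = rng.normal(3.4, 0.8, size=300)  # per-rating scores, baseline (hypothetical)
mos_sr       = rng.normal(3.6, 0.8, size=300)  # per-rating scores, SR output (hypothetical)

boot = np.array([
    rng.choice(mos_sr, mos_sr.size).mean() - rng.choice(mos_baseline, mos_baseline.size).mean()
    for _ in range(10_000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"mean diff = {mos_sr.mean() - mos_baseline.mean():.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
# If the CI excludes 0, the improvement is statistically significant at the 5% level.
```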
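Sketch E (for #115, #117 and #118): a minimal Thurstonian maximum-likelihood reconstruction of scale values from triplet-comparison counts, the general mechanism behind JND-unit scales in JPEG AIC-3. Boosting, boosted-to-plain rescaling, the functional (distortion-rate curve) reconstruction and the data cleansing described in the talks are all omitted; the counts are hypothetical.

```python
# MLE scale reconstruction from triplet comparisons (Thurstone Case V sketch).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# counts[i, j] = times stimulus i was judged MORE distorted than j
# (both compared against the same source image); hypothetical data:
counts = np.array([[0,  4,  1],
                   [16, 0,  3],
                   [19, 17, 0]])
n = counts.shape[0]

def neg_log_likelihood(d_free):
    d = np.concatenate([[0.0], d_free])  # pin the first stimulus at 0 for identifiability
    nll = 0.0
    for i in range(n):
        for j in range(n):
            if i != j and counts[i, j] > 0:
                p = norm.cdf(d[i] - d[j])  # probability i is judged more distorted than j
                nll -= counts[i, j] * np.log(np.clip(p, 1e-12, 1.0))
    return nll

res = minimize(neg_log_likelihood, x0=np.zeros(n - 1), method="L-BFGS-B")
print("scale values (JND-like units):", np.concatenate([[0.0], res.x]))
```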
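Sketch F (for #119): a toy version of the worst-case stress test. A black-box adversary (plain random search here, standing in for the paper's evolutionary optimizer) perturbs a budget of adversarial raters to maximize the deviation of the cleaned MOS from ground truth. The correlation-based hard rejection rule and all data are assumptions for illustration.

```python
# Random-search adversarial stress test of a hard outlier-rejection rule for MOS.
import numpy as np

rng = np.random.default_rng(1)
truth = rng.uniform(1.5, 4.5, size=20)                   # ground-truth quality of 20 stimuli
honest = np.clip(truth + rng.normal(0, 0.5, (24, 20)), 1, 5)

def cleaned_mos(ratings):
    """Hard rejection: drop raters poorly correlated with the overall mean."""
    consensus = ratings.mean(axis=0)
    corr = np.array([np.corrcoef(r, consensus)[0, 1] for r in ratings])
    return ratings[corr > 0.5].mean(axis=0)

def attack_score(adversaries):
    """Worst-case deviation of cleaned MOS from ground truth."""
    return np.abs(cleaned_mos(np.vstack([honest, adversaries])) - truth).max()

best, best_score = rng.uniform(1, 5, (6, 20)), -np.inf   # 6 adversarial raters
for _ in range(2000):
    cand = np.clip(best + rng.normal(0, 0.3, best.shape), 1, 5)
    s = attack_score(cand)
    if s > best_score:
        best, best_score = cand, s
print("worst-case MOS distortion found:", round(best_score, 3))
```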
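Sketch G (for #120): a compact one-coin Dawid-Skene-style EM, a classical instance of jointly inferring item labels and annotator reliability from noisy categorical labels. The data are simulated and the uniform-error noise model is an assumption.

```python
# One-coin Dawid-Skene-style EM: infer labels and per-annotator accuracy jointly.
import numpy as np

rng = np.random.default_rng(7)
K, A, N = 3, 8, 200                                  # classes, annotators, items
true_y = rng.integers(0, K, N)
acc_true = rng.uniform(0.5, 0.95, A)                 # hidden per-annotator accuracy
wrong = (true_y + rng.integers(1, K, (A, N))) % K    # error labels, never equal to truth
labels = np.where(rng.random((A, N)) < acc_true[:, None], true_y, wrong)

q = np.zeros((N, K))                                 # posteriors, initialized by majority vote
for k in range(K):
    q[:, k] = (labels == k).sum(axis=0)
q /= q.sum(axis=1, keepdims=True)

for _ in range(50):
    # M-step: expected accuracy = mean posterior mass on each annotator's given label
    acc = np.array([q[np.arange(N), labels[a]].mean() for a in range(A)])
    acc = np.clip(acc, 1e-3, 1 - 1e-3)
    # E-step: posterior over latent labels under a uniform-error noise model
    logq = np.zeros((N, K))
    for a in range(A):
        for k in range(K):
            p = np.where(labels[a] == k, acc[a], (1 - acc[a]) / (K - 1))
            logq[:, k] += np.log(p)
    q = np.exp(logq - logq.max(axis=1, keepdims=True))
    q /= q.sum(axis=1, keepdims=True)

print("estimated accuracy:", np.round(acc, 2))
print("true accuracy:     ", np.round(acc_true, 2))
```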
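Sketch H (for #130): Pearson correlation plus an earth mover's distance with a spatial (Euclidean) ground metric between two saliency maps, using the POT library (pip install pot). The maps are random placeholders, and the paper's adapted EMD may differ in its details.

```python
# Compare two saliency maps with PLCC and a spatially-aware EMD.
import numpy as np
import ot                                   # Python Optimal Transport
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
h = w = 16
sal_free = rng.random((h, w)); sal_free /= sal_free.sum()   # task-free map (placeholder)
sal_task = rng.random((h, w)); sal_task /= sal_task.sum()   # task-dependent map (placeholder)

plcc = pearsonr(sal_free.ravel(), sal_task.ravel())[0]

# Ground distance between grid cells = Euclidean distance of their coordinates.
coords = np.array([(y, x) for y in range(h) for x in range(w)], dtype=float)
M = ot.dist(coords, coords, metric="euclidean")
emd = ot.emd2(sal_free.ravel(), sal_task.ravel(), M)        # exact EMD cost

print(f"PLCC = {plcc:.3f}, EMD = {emd:.3f}")
```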
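Sketch I (for #131): a minimal task-based numerical observer, here a Hotelling (linear) observer for a signal-known-exactly lesion-detection task, with AUC as the image-quality figure of merit. Data are simulated Gaussian images, not CT; the thesis's 3D washout and deep-learning observers are far richer.

```python
# Hotelling observer on a simulated detection task; AUC as figure of merit.
import numpy as np

rng = np.random.default_rng(5)
npx, n = 64, 500                                  # 8x8 ROI flattened to 64 pixels
signal = np.zeros(npx); signal[27:29] = 1.2       # "lesion" profile (hypothetical)
noise = lambda: rng.normal(0, 1, (n, npx))
absent, present = noise(), noise() + signal       # training images per class

S = 0.5 * (np.cov(absent.T) + np.cov(present.T))  # pooled covariance estimate
w = np.linalg.solve(S + 1e-6 * np.eye(npx),       # Hotelling template
                    present.mean(0) - absent.mean(0))

t0 = noise() @ w                                  # test statistics, fresh signal-absent images
t1 = (noise() + signal) @ w                       # test statistics, fresh signal-present images
auc = (t1[:, None] > t0[None, :]).mean()          # AUC via the Mann-Whitney statistic
print("Hotelling observer AUC:", round(auc, 3))
```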