MciteBench: A Benchmark for Multimodal Citation Text Generation in MLLMs
Paper • 2503.02589
Dataset columns (19 in total): `question_type` (string, 2 classes), `question` (string, 11–421 chars), `answer` (string, 1–2.03k chars), `evidence_keys` (string, 11–61 chars), `evidence_contents` (string, 71–10.2k chars), `evidence_modal` (string, 4 classes), `evidence_count` (int64, 1–4), `distractor_count` (int64, 1–4), `info_count` (int64, always 5), `text_2_idx` / `idx_2_text` (string, 2–11.5k chars), `image_2_idx` / `idx_2_image` (string, 2–421 chars), `table_2_idx` / `idx_2_table` (string, 2–336 chars), `meta_data` (string, 2–1.9k chars), `distractor_contents` (string, 79–11.2k chars), `question_id` (string, 64 chars), `pdf_id` (string, 40 chars).

| question_type | question | answer | evidence_keys | evidence_contents | evidence_modal | evidence_count | distractor_count | info_count | text_2_idx | idx_2_text | image_2_idx | idx_2_image | table_2_idx | idx_2_table | meta_data | distractor_contents | question_id | pdf_id |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
explanation | Why is the performance of your method better on paraphrased datasets than on the Normal Dataset? | Regarding the occasionally better performance of Profiler (and also other baselines) on paraphrased datasets in Table 1 and Table 2, it is important to note that these are in-distribution results, where the training and test data distributions are the same. When detectors are tested in an out-of-distribution setting—wh... | ['Table 1', 'Table 2', 'Figure 4'] | ['images/28a3fb2eb860b2336250a168e4be3a619b992036b649ac8490cf8856010977aa.jpg', 'images/66b65d6ef25661f61096bcb5ba493ecd43565a459b1d58ac7031289ca040692c.jpg', 'images/ee2afb98560f1235c1389cdfa71d968a022447de87c1ce4638ab8d2f07577b6b.jpg'] | ['mixed'] | 3 | 2 | 5 | {'where V is the vocabulary of the surrogate model M, and P˜k ∈R||V ||×1 is the one-hot encoded vector of input token xk over the vocabulary list V . The calculated context losses L = [L1, · · · , LW ] are then used in the next stage to extract the inference pattern. ': '1'} | {'1': 'where V is the vocabulary of the surrogate model M, and P˜k ∈R||V ||×1 is the one-hot encoded vector of input token xk over the vocabulary list V . The calculated context losses L = [L1, · · · , LW ] are then used in the next stage to extract the inference pattern. '} | {'images/6aab4e92c975eda2500335b4f5cd9ae1c46b56dc872b79f30e841caf88d27a56.jpg': '1', 'images/ee2afb98560f1235c1389cdfa71d968a022447de87c1ce4638ab8d2f07577b6b.jpg': '4'} | {'1': 'images/6aab4e92c975eda2500335b4f5cd9ae1c46b56dc872b79f30e841caf88d27a56.jpg', '4': 'images/ee2afb98560f1235c1389cdfa71d968a022447de87c1ce4638ab8d2f07577b6b.jpg'} | {'images/28a3fb2eb860b2336250a168e4be3a619b992036b649ac8490cf8856010977aa.jpg': '1', 'images/66b65d6ef25661f61096bcb5ba493ecd43565a459b1d58ac7031289ca040692c.jpg': '2'} | {'1': 'images/28a3fb2eb860b2336250a168e4be3a619b992036b649ac8490cf8856010977aa.jpg', '2': 'images/66b65d6ef25661f61096bcb5ba493ecd43565a459b1d58ac7031289ca040692c.jpg'} | {} | ['images/6aab4e92c975eda2500335b4f5cd9ae1c46b56dc872b79f30e841caf88d27a56.jpg', 'where V is the vocabulary of the surrogate model M, and P˜k ∈R||V ||×1 is the one-hot encoded vector of input token xk over the vocabulary list V . The calculated context losses L = [L1, · · · , LW ] are then used in the next stage to extr... | a8a6339a943fa79ae72382fb9f1d022d8409510904d542963b580682babf239b | d969953a0cbdd7fa8485cf1555a32f7b3d62a7a4 |
explanation | What improvements does FacLens provide over existing methods? | Our work has clear improvements over existing works in practical applications (efficiency beyond performance) due to the following reasons. In Figure 2, we compare the ante-hoc method (FacLens) with post-hoc methods (SAPLMA and INSIDE). Unlike post-hoc methods, which rely on costly answer generation, the ante-hoc metho... | ['Figure 2', 'Table 1', 'Table 2'] | ['images/3a032e8ef66ebf1569cb7a5f5b30d2f997352c8737325ac4a352e836ccc0b46b.jpg', 'images/b864240b4fe713dbd59ab6cd0219dfd87ab698d5d8661c3167be66f1532aa911.jpg', 'images/169180e0dc9b1431032dd782a66b4967dd3b610e1bdbbe40e7daf0b1a0519c10.jpg'] | ['mixed'] | 3 | 2 | 5 | {'Unsupervised domain adaptation performs well for cross-LLM FacLens. Given an LLM, we train FacLens on the training data of the corresponding domain and directly test the FacLens on the test data of another domain. The results in the upper part of Figure 6 are unsatisfactory. After unsupervised domain adaptation, the ... | {'1': 'Unsupervised domain adaptation performs well for cross-LLM FacLens. Given an LLM, we train FacLens on the training data of the corresponding domain and directly test the FacLens on the test data of another domain. The results in the upper part of Figure 6 are unsatisfactory. After unsupervised domain adaptation,... | {'images/3a032e8ef66ebf1569cb7a5f5b30d2f997352c8737325ac4a352e836ccc0b46b.jpg': '2'} | {'2': 'images/3a032e8ef66ebf1569cb7a5f5b30d2f997352c8737325ac4a352e836ccc0b46b.jpg'} | {'images/169180e0dc9b1431032dd782a66b4967dd3b610e1bdbbe40e7daf0b1a0519c10.jpg': '2', 'images/b864240b4fe713dbd59ab6cd0219dfd87ab698d5d8661c3167be66f1532aa911.jpg': '1'} | {'2': 'images/169180e0dc9b1431032dd782a66b4967dd3b610e1bdbbe40e7daf0b1a0519c10.jpg', '1': 'images/b864240b4fe713dbd59ab6cd0219dfd87ab698d5d8661c3167be66f1532aa911.jpg'} | {} | ['NFP Dataset Construction. Given an LLM m and a QA dataset, for each question q ∈Q, we assign a binary label y to the (m, q) pair, where y = 1 if m fails to generate the golden answer for q, and y = 0 otherwise. The goal of NFP is to predict the labels prior to answer generation. Specifically, we follow previous work ... | 4ab6d6d8dcdf8b7a45b9b9c864dc3959193bbda43c25d024ee44e0234248444d | e2297ed06ca065d361ec3f28961b352c3377db10 |
explanation | How does FacLens compare to previous methods in terms of performance? | Table 1 shows that FacLens achieves clear performance gains over most baselines. While the performance gains over LoRA and Self-Evaluation are slightly smaller, FacLens significantly outperforms both of them in terms of training efficiency (see Table 2), which is crucial for practical applications. Moreover, as shown i... | ['Table 1', 'Table 2', 'Figure 2'] | ['images/b864240b4fe713dbd59ab6cd0219dfd87ab698d5d8661c3167be66f1532aa911.jpg', 'images/169180e0dc9b1431032dd782a66b4967dd3b610e1bdbbe40e7daf0b1a0519c10.jpg', 'images/3a032e8ef66ebf1569cb7a5f5b30d2f997352c8737325ac4a352e836ccc0b46b.jpg'] | ['mixed'] | 3 | 2 | 5 | {'where zS,i = genc (xS,i) , zT,j = genc (xT,j), NS = NT = |Qtrain| is the number of questions for training, and k (·) denotes a kernel function. We discuss the choice of kernel function in Appendix G. The hidden question representations are taken from the middle layer of the LLM. ': '1'} | {'1': 'where zS,i = genc (xS,i) , zT,j = genc (xT,j), NS = NT = |Qtrain| is the number of questions for training, and k (·) denotes a kernel function. We discuss the choice of kernel function in Appendix G. The hidden question representations are taken from the middle layer of the LLM. '} | {'images/add180d7870c649480bd2826bf4b5b054bf92dd72510bcbfde99e0efaf2a9972.jpg': '7', 'images/3a032e8ef66ebf1569cb7a5f5b30d2f997352c8737325ac4a352e836ccc0b46b.jpg': '2'} | {'7': 'images/add180d7870c649480bd2826bf4b5b054bf92dd72510bcbfde99e0efaf2a9972.jpg', '2': 'images/3a032e8ef66ebf1569cb7a5f5b30d2f997352c8737325ac4a352e836ccc0b46b.jpg'} | {'images/169180e0dc9b1431032dd782a66b4967dd3b610e1bdbbe40e7daf0b1a0519c10.jpg': '2', 'images/b864240b4fe713dbd59ab6cd0219dfd87ab698d5d8661c3167be66f1532aa911.jpg': '1'} | {'2': 'images/169180e0dc9b1431032dd782a66b4967dd3b610e1bdbbe40e7daf0b1a0519c10.jpg', '1': 'images/b864240b4fe713dbd59ab6cd0219dfd87ab698d5d8661c3167be66f1532aa911.jpg'} | {} | ['images/add180d7870c649480bd2826bf4b5b054bf92dd72510bcbfde99e0efaf2a9972.jpg', 'where zS,i = genc (xS,i) , zT,j = genc (xT,j), NS = NT = |Qtrain| is the number of questions for training, and k (·) denotes a kernel function. We discuss the choice of kernel function in Appendix G. The hidden question representations are... | 670d6826b93a707dab76d21a73b5c691457ec286bcc186606cd4c02327464670 | e2297ed06ca065d361ec3f28961b352c3377db10 |
explanation | What analyses have the authors done on how properties of the dataset affect the performance of MLLMs? | In Figure 5 of the paper, we present the relationship between the number of images and the accuracy of image association in the IITC task. From the figure, we can see the following: 1. The image association accuracy of the VEGA-base-4k model decreases as the number of images increases. 2. For the other closed-source mo... | ['Figure 5', 'Figure 4', 'Table 2'] | ['images/e4de0bd64fbf86f5f6d26dd1d132f25e1b4ab75f15d00969dc8b95148cc06e38.jpg', 'images/804f68c55932623c3d9dfb50941f9e1f5b2d9f67de4f5c63abcea211bb0f685a.jpg', 'images/c253055b0b2a2668877b56f92b77d8e7edba2c56e1e1558d781fe91e3e938c77.jpg'] | ['mixed'] | 3 | 2 | 5 | {'Ultimately, we have developed a novel dataset, designated as VEGA. It is comprised of two subsets, one tailored for the IITC task and another for the ITA task. The longest interleaved image-text content in VEGA reaches up to 8 images and 8k tokens. We design the instruction of the IITC task to be a question about onl... | {'1': 'Ultimately, we have developed a novel dataset, designated as VEGA. It is comprised of two subsets, one tailored for the IITC task and another for the ITA task. The longest interleaved image-text content in VEGA reaches up to 8 images and 8k tokens. We design the instruction of the IITC task to be a question abou... | {'images/804f68c55932623c3d9dfb50941f9e1f5b2d9f67de4f5c63abcea211bb0f685a.jpg': '4', 'images/31070a6e7c3b53f02d80b85f2a2fffaeba361f9c85891c05adfcb10e733faf05.jpg': '1', 'images/e4de0bd64fbf86f5f6d26dd1d132f25e1b4ab75f15d00969dc8b95148cc06e38.jpg': '5'} | {'4': 'images/804f68c55932623c3d9dfb50941f9e1f5b2d9f67de4f5c63abcea211bb0f685a.jpg', '1': 'images/31070a6e7c3b53f02d80b85f2a2fffaeba361f9c85891c05adfcb10e733faf05.jpg', '5': 'images/e4de0bd64fbf86f5f6d26dd1d132f25e1b4ab75f15d00969dc8b95148cc06e38.jpg'} | {'images/c253055b0b2a2668877b56f92b77d8e7edba2c56e1e1558d781fe91e3e938c77.jpg': '2'} | {'2': 'images/c253055b0b2a2668877b56f92b77d8e7edba2c56e1e1558d781fe91e3e938c77.jpg'} | {} | ['images/31070a6e7c3b53f02d80b85f2a2fffaeba361f9c85891c05adfcb10e733faf05.jpg', 'Ultimately, we have developed a novel dataset, designated as VEGA. It is comprised of two subsets, one tailored for the IITC task and another for the ITA task. The longest interleaved image-text content in VEGA reaches up to 8 images and 8... | 8caf5a4e8ea45a9c61b2a596fe76417f7aa5a3d875406f1784a388872e17ead8 | ff04147bfeb3ecdb49c1ad6b729c8776be9205bc |
explanation | How does the paper address the marginal improvements observed in the experimental results? | Notice that spectral regularization is always amongst the best-performing methods in all experiments. Moreover, in several experiments, spectral regularization was significantly better than any other baseline: Figure 1 (left), Figure 2 (right), Figure 3. | ['Figure 1', 'Figure 2', 'Figure 3'] | ['images/cc3e375bd58a5db5d5bfb40f3e0e6e18698bfc87b9e91e9c19f39f395a094447.jpg', 'images/eae1f91f604a2b20a638a94f3ad6b7ae424d1a59ccfeab1575bd54f87cd3f353.jpg', 'images/99156fcd57bbe834cdf486d0f7684362f9231d5f09b81a4e5686dc70589ef3c9.jpg'] | ['figure'] | 3 | 2 | 5 | {'Neural network initialization is key to trainability (He et al., 2015; Hinton and Salakhutdinov, 2006). One property of the initialization thought to be important is that the layerwise mapping, hl+1 = ReLU(θlhl), has a Jacobian with singular values that are close to or exactly one (Glorot and Bengio, 2010; Pennington... | {'1': 'Neural network initialization is key to trainability (He et al., 2015; Hinton and Salakhutdinov, 2006). One property of the initialization thought to be important is that the layerwise mapping, hl+1 = ReLU(θlhl), has a Jacobian with singular values that are close to or exactly one (Glorot and Bengio, 2010; Penni... | {'images/99156fcd57bbe834cdf486d0f7684362f9231d5f09b81a4e5686dc70589ef3c9.jpg': '3', 'images/cc3e375bd58a5db5d5bfb40f3e0e6e18698bfc87b9e91e9c19f39f395a094447.jpg': '1', 'images/a3485f80f366e7de3691aaad423ab16f973943002252e73580726f373f1bd657.jpg': '5', 'images/eae1f91f604a2b20a638a94f3ad6b7ae424d1a59ccfeab1575bd54f87cd... | {'3': 'images/99156fcd57bbe834cdf486d0f7684362f9231d5f09b81a4e5686dc70589ef3c9.jpg', '1': 'images/cc3e375bd58a5db5d5bfb40f3e0e6e18698bfc87b9e91e9c19f39f395a094447.jpg', '5': 'images/a3485f80f366e7de3691aaad423ab16f973943002252e73580726f373f1bd657.jpg', '2': 'images/eae1f91f604a2b20a638a94f3ad6b7ae424d1a59ccfeab1575bd54... | {} | {} | {} | ['Neural network initialization is key to trainability (He et al., 2015; Hinton and Salakhutdinov, 2006). One property of the initialization thought to be important is that the layerwise mapping, hl+1 = ReLU(θlhl), has a Jacobian with singular values that are close to or exactly one (Glorot and Bengio, 2010; Pennington... | e103290df88fe0eeb1f60aaf6df31d7daf51b5ff817ce6d7c06e4f19ca381e1f | 05fe05b0399402d34686a7b695820eaf3b6b5eca |
explanation | What improvements does spectral regularization provide over L2 regularization? | Empirically, spectral regularization is a large improvement over L2 regularization in several of our experiments, e.g. Figure 1 (left), Figure 2 (right), and Figure 3. Moreover, spectral regularization is more robust to its hyperparameter and always among the 1 or 2 best-performing methods in all of our experiments. | ['Figure 1', 'Figure 2', 'Figure 3'] | ['images/cc3e375bd58a5db5d5bfb40f3e0e6e18698bfc87b9e91e9c19f39f395a094447.jpg', 'images/eae1f91f604a2b20a638a94f3ad6b7ae424d1a59ccfeab1575bd54f87cd3f353.jpg', 'images/99156fcd57bbe834cdf486d0f7684362f9231d5f09b81a4e5686dc70589ef3c9.jpg'] | ['figure'] | 3 | 2 | 5 | {'Loss of Trainability Mitigators In our main results, we compare spectral regularization against L2 regularization towards zero, shrink and perturb (Ash and Adams, 2020), L2 regularization towards the initialization (Kumar et al., 2023), recycling dormant neurons (ReDO, Sokar et al., 2023), concatenated ReLU (Abbas et... | {'1': 'Loss of Trainability Mitigators In our main results, we compare spectral regularization against L2 regularization towards zero, shrink and perturb (Ash and Adams, 2020), L2 regularization towards the initialization (Kumar et al., 2023), recycling dormant neurons (ReDO, Sokar et al., 2023), concatenated ReLU (Abb... | {'images/99156fcd57bbe834cdf486d0f7684362f9231d5f09b81a4e5686dc70589ef3c9.jpg': '3', 'images/cc3e375bd58a5db5d5bfb40f3e0e6e18698bfc87b9e91e9c19f39f395a094447.jpg': '1', 'images/eae1f91f604a2b20a638a94f3ad6b7ae424d1a59ccfeab1575bd54f87cd3f353.jpg': '2'} | {'3': 'images/99156fcd57bbe834cdf486d0f7684362f9231d5f09b81a4e5686dc70589ef3c9.jpg', '1': 'images/cc3e375bd58a5db5d5bfb40f3e0e6e18698bfc87b9e91e9c19f39f395a094447.jpg', '2': 'images/eae1f91f604a2b20a638a94f3ad6b7ae424d1a59ccfeab1575bd54f87cd3f353.jpg'} | {} | {} | {} | ['Loss of Trainability Mitigators In our main results, we compare spectral regularization against L2 regularization towards zero, shrink and perturb (Ash and Adams, 2020), L2 regularization towards the initialization (Kumar et al., 2023), recycling dormant neurons (ReDO, Sokar et al., 2023), concatenated ReLU (Abbas et... | 22132795dd4d718836bcea76aa5a9ee27154f136067d4d67d1e043271a66c6a1 | 05fe05b0399402d34686a7b695820eaf3b6b5eca |
explanation | How are passenger profiles integrated into the origin-destination matrix at the regional or stop level? | As shown in Figure 1(d), a walking distance is deemed acceptable if it is limited to 1.1 km. Concerning the average velocity, Figure 1(e), and the trip time, Figure 1(f), all registers with values greater than 80 km/h and 2 hours are unconsidered. These values were estimated by local specialists based on the passengers... | ['Figure 1'] | ['images/c5a31d9a40f25518ecd6eaeca1e01c4b3acbd744bcdc450d17a22bfd197dd041.jpg'] | ['figure'] | 1 | 4 | 5 | {'In the subsequent phase, Figure 1(c), we analyzed user types to determine the feasibility of estimating their alighting points. In Salvador, there is no device to validate the passengers’ alighting; therefore, the main challenge is to estimate it by analyzing the following boarding. Moreover, it is impossible to trac... | {'1': 'In the subsequent phase, Figure 1(c), we analyzed user types to determine the feasibility of estimating their alighting points. In Salvador, there is no device to validate the passengers’ alighting; therefore, the main challenge is to estimate it by analyzing the following boarding. Moreover, it is impossible to... | {'images/c5a31d9a40f25518ecd6eaeca1e01c4b3acbd744bcdc450d17a22bfd197dd041.jpg': '1', 'images/23f1fdc539186d67330f172d2edf9ee702c9e85cdedb350bee9c37ac4c5cfed4.jpg': '3'} | {'1': 'images/c5a31d9a40f25518ecd6eaeca1e01c4b3acbd744bcdc450d17a22bfd197dd041.jpg', '3': 'images/23f1fdc539186d67330f172d2edf9ee702c9e85cdedb350bee9c37ac4c5cfed4.jpg'} | {} | {} | {} | ['images/23f1fdc539186d67330f172d2edf9ee702c9e85cdedb350bee9c37ac4c5cfed4.jpg', 'In the subsequent phase, Figure 1(c), we analyzed user types to determine the feasibility of estimating their alighting points. In Salvador, there is no device to validate the passengers’ alighting; therefore, the main challenge is to esti... | c8f71f59ce47e86848347df22d37552cc7e4d12d8bf81a5447d0338086cffd33 | 5aa218287d89432e6fc34652ca252cfe99d92e21 |
explanation | What is the rationale for the experimental configurations chosen in the study? | Figure 4 (MAD): This figure focuses on a case study demonstrating a counter-intuitive phenomenon where introducing errors can improve performance—a rare observation in multi-agent systems. MAD was selected specifically for its relevance to this unique insight. Figure 7a (Exclusion of MAD): MAD was excluded from Figure ... | ['Figure 4', 'Figure 7', 'Figure 8'] | ['images/0b55b386169f13d50b1b7ff47bfa61c9126516d2fca0fe9057685662016c9e22.jpg', 'images/c0b3ca79844ea127d86dcdd3fcf3e95955edba6f7897f1c1e732e26c9b917a50.jpg', 'images/28fe1291bf019109651723940887ed6ff1e1b4a60028a15b648aaa959d5b622c.jpg'] | ['figure'] | 3 | 2 | 5 | {'Current LLMs prioritize natural language over code. Fig. 6b illustrates that distraction comments can mislead LLMs into accepting incorrect code as correct across all six systems studied. This indicates that the systems tend to prioritize comments over the actual code. In the example, the system detects an error in t... | {'1': 'Current LLMs prioritize natural language over code. Fig. 6b illustrates that distraction comments can mislead LLMs into accepting incorrect code as correct across all six systems studied. This indicates that the systems tend to prioritize comments over the actual code. In the example, the system detects an error... | {'images/0b55b386169f13d50b1b7ff47bfa61c9126516d2fca0fe9057685662016c9e22.jpg': '4', 'images/c0b3ca79844ea127d86dcdd3fcf3e95955edba6f7897f1c1e732e26c9b917a50.jpg': '7', 'images/28fe1291bf019109651723940887ed6ff1e1b4a60028a15b648aaa959d5b622c.jpg': '8', 'images/18cebe9c02b6160708591815b62a39d65034f0db42e6f9495510e4a203c... | {'4': 'images/0b55b386169f13d50b1b7ff47bfa61c9126516d2fca0fe9057685662016c9e22.jpg', '7': 'images/c0b3ca79844ea127d86dcdd3fcf3e95955edba6f7897f1c1e732e26c9b917a50.jpg', '8': 'images/28fe1291bf019109651723940887ed6ff1e1b4a60028a15b648aaa959d5b622c.jpg', '1': 'images/18cebe9c02b6160708591815b62a39d65034f0db42e6f9495510e4... | {} | {} | {} | ['Current LLMs prioritize natural language over code. Fig. 6b illustrates that distraction comments can mislead LLMs into accepting incorrect code as correct across all six systems studied. This indicates that the systems tend to prioritize comments over the actual code. In the example, the system detects an error in t... | 28348747626e6364ef8ed1d3cf3ae2a27e837a31c9e72c81cf34fc34a077ec92 | 5f4382c8b4eb16e5bc379f3c02f21f53318dbacb |
explanation | What evidence supports the claim of improved zero-shot generalization? | We respectfully disagree with the reviewer’s assertion that the paper does not demonstrate improved zero-shot generalization, as we show this in Procgen (see aggregate performance added to Table 3). Additionally, we present the FDD approach (Table 2), where we observe improvement in the generalization gap for the DMC e... | ['Table 2', 'Table 3', 'Table 1'] | ['images/5569c6a3bfa524742be607e0b74ea0c979027562b132ff607589531c164f2e8b.jpg', 'images/dcb7d6d85125826affdf8c728bcb27be4cf1096384a051e10fe381534f2d375b.jpg', 'images/4700ceaef17c3682b2201018d66e8a2e1c59985dcb94ee0a1293d3d2c28e41f8.jpg'] | ['table'] | 3 | 2 | 5 | {} | {} | {'images/d65db754ee4fe5be255f66580d4f92aa03a1107db5b556c3b4ea7d63b56fec34.jpg': '5', 'images/8759ff84202e49665b7630122fce5a4391e5fd728dd26bcf8bd79ba128b548f2.jpg': '2'} | {'5': 'images/d65db754ee4fe5be255f66580d4f92aa03a1107db5b556c3b4ea7d63b56fec34.jpg', '2': 'images/8759ff84202e49665b7630122fce5a4391e5fd728dd26bcf8bd79ba128b548f2.jpg'} | {'images/4700ceaef17c3682b2201018d66e8a2e1c59985dcb94ee0a1293d3d2c28e41f8.jpg': '1', 'images/5569c6a3bfa524742be607e0b74ea0c979027562b132ff607589531c164f2e8b.jpg': '2', 'images/dcb7d6d85125826affdf8c728bcb27be4cf1096384a051e10fe381534f2d375b.jpg': '3'} | {'1': 'images/4700ceaef17c3682b2201018d66e8a2e1c59985dcb94ee0a1293d3d2c28e41f8.jpg', '2': 'images/5569c6a3bfa524742be607e0b74ea0c979027562b132ff607589531c164f2e8b.jpg', '3': 'images/dcb7d6d85125826affdf8c728bcb27be4cf1096384a051e10fe381534f2d375b.jpg'} | {} | ['images/8759ff84202e49665b7630122fce5a4391e5fd728dd26bcf8bd79ba128b548f2.jpg', 'images/d65db754ee4fe5be255f66580d4f92aa03a1107db5b556c3b4ea7d63b56fec34.jpg'] | 5357a51d9c1a64e442ce83018c4e81ed44c53e736a443bd65b61b021ea85c150 | 67ffaaf503d82d0615454baf237f5e5a9ff7bb19 |
explanation | Do you have a proof that PolyReLU and PolyNorm have equivalent expressivity? | Thank you for pointing out the less precise expression. We have rephrased the sentence as follows: 'From Figure 1, one can see that the expressivity of PolyNorm is greater than or equal to that of PolyReLU.' The claim is primarily supported through the empirical evidence provided in the paper. As can be observed in Fig... | ['Figure 1', 'Figure 6', 'Figure 7'] | ['images/a4c46101de0b0f13b987de572c9324742705fcb26f894ba6c6254c285adddf1a.jpg', 'images/046ab70a2b4e254b1ece36680859bb7ab5fac1877cc75d8c41d950778c8f1046.jpg', 'images/b14e1904e8ec3fdee265107c7746e771c19d93de74f082fb6f52c4f54678406b.jpg'] | ['figure'] | 3 | 2 | 5 | {'Hyperparameters. Unless otherwise specified, we use a third-order PolyCom by default and initialize the coefficients as ai = 1/3 for i = 1, 2, 3 and set a0 = 0. Model weights are randomly initialized. For optimization, we apply the AdamW optimizer with β1 = 0.9 and β2 = 0.95. All models are trained on sequences of 40... | {'1': 'Hyperparameters. Unless otherwise specified, we use a third-order PolyCom by default and initialize the coefficients as ai = 1/3 for i = 1, 2, 3 and set a0 = 0. Model weights are randomly initialized. For optimization, we apply the AdamW optimizer with β1 = 0.9 and β2 = 0.95. All models are trained on sequences ... | {'images/b14e1904e8ec3fdee265107c7746e771c19d93de74f082fb6f52c4f54678406b.jpg': '7', 'images/046ab70a2b4e254b1ece36680859bb7ab5fac1877cc75d8c41d950778c8f1046.jpg': '6', 'images/824414a9a148b783330a35bbd312329fc253390c4f58a7711bcb9a1d90809da1.jpg': '2', 'images/a4c46101de0b0f13b987de572c9324742705fcb26f894ba6c6254c285ad... | {'7': 'images/b14e1904e8ec3fdee265107c7746e771c19d93de74f082fb6f52c4f54678406b.jpg', '6': 'images/046ab70a2b4e254b1ece36680859bb7ab5fac1877cc75d8c41d950778c8f1046.jpg', '2': 'images/824414a9a148b783330a35bbd312329fc253390c4f58a7711bcb9a1d90809da1.jpg', '1': 'images/a4c46101de0b0f13b987de572c9324742705fcb26f894ba6c6254c... | {} | {} | {} | ['images/824414a9a148b783330a35bbd312329fc253390c4f58a7711bcb9a1d90809da1.jpg', 'Hyperparameters. Unless otherwise specified, we use a third-order PolyCom by default and initialize the coefficients as ai = 1/3 for i = 1, 2, 3 and set a0 = 0. Model weights are randomly initialized. For optimization, we apply the AdamW o... | c1bc3c66ef0dee68fef185813dcc321a868969e1fce058e8db05d4896e37025c | 8b6c738aadc6b44e6ec8736d7e10c499122c0609 |
explanation | Include aforementioned key benchmarks to facilitate a more comprehensive comparison. | We provide additional performance comparisons with distillation sampling variant on CIFAR-10 (Table 1) and with direct consistency training variant on ImageNet 64 × 64 (Table 2). We have now included the key baselines [2], [3], [4] in Table 3 and Table 4. | ['Table 1', 'Table 2', 'Table 3', 'Table 4'] | ['images/4584bc1cab2b3666269c237bdbbd1b4df550e3959b8bb76bdede29a12727b351.jpg', 'images/a6254a44a961174af259338151f5d83522877672f091e94dc159e438aded5ddc.jpg', 'images/75f3909c72e81a021d776ae110c21cbef76c5af19d2a275c98cb87c82056d383.jpg', 'images/0c955ce83df71273494300492571aa486b8447724460975d37e40202ee8c8a1f.jpg'] | ['table'] | 4 | 1 | 5 | {'Instead of directly regressing on the ground truth vector field, Consistency-FM directly defines straight flows with consistent velocity that start from different times to the same endpoint. Specifically, we have the following lemma (prove in Appendix A.1): ': '1'} | {'1': 'Instead of directly regressing on the ground truth vector field, Consistency-FM directly defines straight flows with consistent velocity that start from different times to the same endpoint. Specifically, we have the following lemma (prove in Appendix A.1): '} | {} | {} | {'images/4584bc1cab2b3666269c237bdbbd1b4df550e3959b8bb76bdede29a12727b351.jpg': '1', 'images/75f3909c72e81a021d776ae110c21cbef76c5af19d2a275c98cb87c82056d383.jpg': '3', 'images/a6254a44a961174af259338151f5d83522877672f091e94dc159e438aded5ddc.jpg': '2', 'images/0c955ce83df71273494300492571aa486b8447724460975d37e40202ee8... | {'1': 'images/4584bc1cab2b3666269c237bdbbd1b4df550e3959b8bb76bdede29a12727b351.jpg', '3': 'images/75f3909c72e81a021d776ae110c21cbef76c5af19d2a275c98cb87c82056d383.jpg', '2': 'images/a6254a44a961174af259338151f5d83522877672f091e94dc159e438aded5ddc.jpg', '4': 'images/0c955ce83df71273494300492571aa486b8447724460975d37e402... | {} | ['Instead of directly regressing on the ground truth vector field, Consistency-FM directly defines straight flows with consistent velocity that start from different times to the same endpoint. Specifically, we have the following lemma (prove in Appendix A.1): '] | fd2f46c9e9ce065018261c79e4bf414a71abfae337f3faa2bf15a48fdd911f0c | 8c2ef55eef0d86e9d05bef581f26ff0fb739fa87 |
explanation | What are the reasons for the different performances of the unguided approach across various tasks? | The proposed distillation methods indeed have different effects in different tasks. The Table 2 corresponds to the scenario of zero-shot inference on large language models. In this case, to produce meaningful (not random) inference, the model capacity and training dataset need to be sufficiently large. As we often obse... | ['Table 1', 'Table 2', 'Table 3'] | ['images/83b6e86b10024de7152534b36aabdc49122020f75cebe1217ea2380354aff292.jpg', 'images/79766378db8c691eda1c3f51d27ae46996a8e59c5475c93a621738726d28df13.jpg', 'images/bd5358fe10a83b107dacdece7f6c535060a32b18ad102fc6b077d4c6375984b6.jpg'] | ['table'] | 3 | 2 | 5 | {'• Target Guided. We can directly transfer the parameters from the fine-tuned teacher target model and distill from it. More specifically, given the model inputs for training on a target classification task, we denote the hidden states from the student and the teacher target as H(s) and H(t), each with m vectors; outp... | {'1': '• Target Guided. We can directly transfer the parameters from the fine-tuned teacher target model and distill from it. More specifically, given the model inputs for training on a target classification task, we denote the hidden states from the student and the teacher target as H(s) and H(t), each with m vectors;... | {} | {} | {'images/83b6e86b10024de7152534b36aabdc49122020f75cebe1217ea2380354aff292.jpg': '1', 'images/79766378db8c691eda1c3f51d27ae46996a8e59c5475c93a621738726d28df13.jpg': '2', 'images/bd5358fe10a83b107dacdece7f6c535060a32b18ad102fc6b077d4c6375984b6.jpg': '3'} | {'1': 'images/83b6e86b10024de7152534b36aabdc49122020f75cebe1217ea2380354aff292.jpg', '2': 'images/79766378db8c691eda1c3f51d27ae46996a8e59c5475c93a621738726d28df13.jpg', '3': 'images/bd5358fe10a83b107dacdece7f6c535060a32b18ad102fc6b077d4c6375984b6.jpg'} | {} | ['• Target Guided. We can directly transfer the parameters from the fine-tuned teacher target model and distill from it. More specifically, given the model inputs for training on a target classification task, we denote the hidden states from the student and the teacher target as H(s) and H(t), each with m vectors; outp... | 4a4b0196466c6c22db5b60d2b3f0218bd1a1b5721c5c8290e83a3e171768f2c5 | 91bbf564af0c392bf3d0152e8ff6b20e5a1f211f |
explanation | How is the last-demonstration clustering supported by evidence? | While we acknowledge that last-demonstration clustering may appear less pronounced in some visualizations, multiple lines of evidence still support its existence: Figure 3a shows elevated percentage frequencies for last demonstrations compared to middle positions, Figure 3b demonstrates higher partial derivative norms ... | ['Figure 3', 'Figure 5'] | ['images/fabe8c971816529b4c874def50c0f2e100520e70af95a726acf1450e01eea639.jpg', 'images/fe074d7e6b9aab3e309f6ad1ffdc5778d949aecc4bf0867ae31b1e7e1ffc94eb.jpg'] | ['figure'] | 2 | 3 | 5 | {'We prepare 100 randomized prompts and compute the partial derivative norms similarly to Section 3.2. To ensure the prompts are differently distributed to training ones, we build each prompt as a sequence of 50 to 100 random words, resulting in meaningless sentences. For each prompt, we compute its chunk partial deriv... | {'1': 'We prepare 100 randomized prompts and compute the partial derivative norms similarly to Section 3.2. To ensure the prompts are differently distributed to training ones, we build each prompt as a sequence of 50 to 100 random words, resulting in meaningless sentences. For each prompt, we compute its chunk partial ... | {'images/ec3135d1cfef854bb75e4265222eba50ed2b0ed0d52b35742ddda3078c21d394.jpg': '2', 'images/fabe8c971816529b4c874def50c0f2e100520e70af95a726acf1450e01eea639.jpg': '3', 'images/72e917b6bbc2426132b4a78754ac69c62ff71dbba1420f390b68497c5bb6d90e.jpg': '6', 'images/fe074d7e6b9aab3e309f6ad1ffdc5778d949aecc4bf0867ae31b1e7e1ff... | {'2': 'images/ec3135d1cfef854bb75e4265222eba50ed2b0ed0d52b35742ddda3078c21d394.jpg', '3': 'images/fabe8c971816529b4c874def50c0f2e100520e70af95a726acf1450e01eea639.jpg', '6': 'images/72e917b6bbc2426132b4a78754ac69c62ff71dbba1420f390b68497c5bb6d90e.jpg', '5': 'images/fe074d7e6b9aab3e309f6ad1ffdc5778d949aecc4bf0867ae31b1e... | {} | {} | {} | ['images/72e917b6bbc2426132b4a78754ac69c62ff71dbba1420f390b68497c5bb6d90e.jpg', 'images/ec3135d1cfef854bb75e4265222eba50ed2b0ed0d52b35742ddda3078c21d394.jpg', 'We prepare 100 randomized prompts and compute the partial derivative norms similarly to Section 3.2. To ensure the prompts are differently distributed to traini... | 2ce088d4fbe67d6821915184112e5defdd7a3bcfcfe7b9ce6b34a0b27eb435bc | b8fc178ed7dc8207c662d4ba992e64d9a28fc8ee |
explanation | Does the method work with real-world images? | Our work works well with real-world images (see the first two samples of Figure 4, all three samples of Figure 5, first two samples of Figure 6, and all the samples of Figure 7). | ['Figure 4', 'Figure 5', 'Figure 6', 'Figure 7'] | ['images/9b75a55929abeeee0f970442e7358f841aba7019bab5cdac23752b0c2ed34f32.jpg', 'images/ece6ed302a7295cf3813537e94d26a34f41157b56cd355d789ada87b42bac8ea.jpg', 'images/4894814557a8f53513b6310e3ee6de20a59c21ba809ee7c937ed9716f8450cb1.jpg', 'images/4302891effddf7411089361e222e4f6d69e2c5ade47430b9764184817e08b1a6.jpg'] | ['figure'] | 4 | 1 | 5 | {} | {} | {'images/4894814557a8f53513b6310e3ee6de20a59c21ba809ee7c937ed9716f8450cb1.jpg': '6', 'images/c4510071d04ee16399d200e62fee65a7c0007882c555daff9368e23ae77f2c23.jpg': '3', 'images/4302891effddf7411089361e222e4f6d69e2c5ade47430b9764184817e08b1a6.jpg': '7', 'images/ece6ed302a7295cf3813537e94d26a34f41157b56cd355d789ada87b42b... | {'6': 'images/4894814557a8f53513b6310e3ee6de20a59c21ba809ee7c937ed9716f8450cb1.jpg', '3': 'images/c4510071d04ee16399d200e62fee65a7c0007882c555daff9368e23ae77f2c23.jpg', '7': 'images/4302891effddf7411089361e222e4f6d69e2c5ade47430b9764184817e08b1a6.jpg', '5': 'images/ece6ed302a7295cf3813537e94d26a34f41157b56cd355d789ada8... | {} | {} | {} | ['images/c4510071d04ee16399d200e62fee65a7c0007882c555daff9368e23ae77f2c23.jpg'] | b607fe6e943eefd103b89382aad8a02304a8098893da7dd0d8060f6c1189ad21 | dc4965f7e90b8b1f74b0f2cf392194fdb07ae1ab |
explanation | What are the reasons for the marginal accuracy improvements observed in the ablation studies? | For the performance improvement of the model, it is important to highlight that many of the baselines we selected are recent and highly competitive models, making accuracy improvements both challenging and meaningful. Regarding the relatively marginal improvements observed in the ablation studies, this is because we co... | ['Table 3', 'Table 4', 'Table 5', 'Table 6'] | ['images/aa351033b8cf75db6e07454ead56e7b665fae04b673800e8f2558e4d54cef916.jpg', 'images/1f1078f753f79ee4153f96eee248fd1500d4c26e7a555df9cf2f32b3f99ab65b.jpg', 'images/9309bdd10a5e15d03da92fbb58615df58674e8dbbcf7250ca71335513ecb2cba.jpg', 'images/51c306a362a485c4852f716f2bfcf255a948e69e522101b7a27c655d91600da9.jpg'] | ['table'] | 4 | 1 | 5 | {'After passing through n layers of the PSformer Encoder, the final output is Xpred = XoutW F , where Xpred ∈RM×F , and W F ∈RL×F is a linear mapping, where F is the prediction length. The Xpred is the final output of the PSformer model. The PSformer structure does not use positional encoding, as the segment attention ... | {'1': 'After passing through n layers of the PSformer Encoder, the final output is Xpred = XoutW F , where Xpred ∈RM×F , and W F ∈RL×F is a linear mapping, where F is the prediction length. The Xpred is the final output of the PSformer model. The PSformer structure does not use positional encoding, as the segment atten... | {} | {} | {'images/51c306a362a485c4852f716f2bfcf255a948e69e522101b7a27c655d91600da9.jpg': '6', 'images/aa351033b8cf75db6e07454ead56e7b665fae04b673800e8f2558e4d54cef916.jpg': '3', 'images/1f1078f753f79ee4153f96eee248fd1500d4c26e7a555df9cf2f32b3f99ab65b.jpg': '4', 'images/9309bdd10a5e15d03da92fbb58615df58674e8dbbcf7250ca71335513ec... | {'6': 'images/51c306a362a485c4852f716f2bfcf255a948e69e522101b7a27c655d91600da9.jpg', '3': 'images/aa351033b8cf75db6e07454ead56e7b665fae04b673800e8f2558e4d54cef916.jpg', '4': 'images/1f1078f753f79ee4153f96eee248fd1500d4c26e7a555df9cf2f32b3f99ab65b.jpg', '5': 'images/9309bdd10a5e15d03da92fbb58615df58674e8dbbcf7250ca71335... | {} | ['After passing through n layers of the PSformer Encoder, the final output is Xpred = XoutW F , where Xpred ∈RM×F , and W F ∈RL×F is a linear mapping, where F is the prediction length. The Xpred is the final output of the PSformer model. The PSformer structure does not use positional encoding, as the segment attention ... | 8abcf8cbc84e27faa2a3473349a878f22d2e9d285585994ea6deb78140bf8142 | e69a59c151ec85e9a7265a99a50bc763aa6cf326 |
explanation | What is the motivation for introducing an uncertainty-aware exploration strategy? | We have updated the abstract to clarify the motivation of our work. As further elaborated in the introduction, most existing methods treat recommendation as a static process, which prevents them from effectively accounting for users’ evolving preferences. Sequential recommendation methods address this limitation to som... | ['Figure 1', 'Table 1'] | ['images/fa5fccc5987c17ab6cac4b5db3a07f8ec93a37436f6e558577aba21b8cbbcf4f.jpg', 'images/9b094692d407a6efcb89998338d3da6e042001fee571511e0a247c7e41638bc3.jpg'] | ['mixed'] | 2 | 3 | 5 | {'where ratingu,i is the user assigned rating, τ is the threshold to identify if a user provided rating is positive. Evidential reward aggregates the recommended items’ rating as a traditional reward r balanced with their vacuity predictions as a measure of information gain, denoted as an uncertainty regularizer R. Dur... | {'1': 'where ratingu,i is the user assigned rating, τ is the threshold to identify if a user provided rating is positive. Evidential reward aggregates the recommended items’ rating as a traditional reward r balanced with their vacuity predictions as a measure of information gain, denoted as an uncertainty regularizer R... | {'images/df2c9e1253936ea04152414231d474bf9ca9030048f1ea7acdaa739044d37396.jpg': '2', 'images/fa5fccc5987c17ab6cac4b5db3a07f8ec93a37436f6e558577aba21b8cbbcf4f.jpg': '1'} | {'2': 'images/df2c9e1253936ea04152414231d474bf9ca9030048f1ea7acdaa739044d37396.jpg', '1': 'images/fa5fccc5987c17ab6cac4b5db3a07f8ec93a37436f6e558577aba21b8cbbcf4f.jpg'} | {'images/0d9baf01287057b51e00bb8299a7e6f498abf529debe3d15976263037e9bbbb1.jpg': '3', 'images/9b094692d407a6efcb89998338d3da6e042001fee571511e0a247c7e41638bc3.jpg': '1'} | {'3': 'images/0d9baf01287057b51e00bb8299a7e6f498abf529debe3d15976263037e9bbbb1.jpg', '1': 'images/9b094692d407a6efcb89998338d3da6e042001fee571511e0a247c7e41638bc3.jpg'} | {} | ['images/df2c9e1253936ea04152414231d474bf9ca9030048f1ea7acdaa739044d37396.jpg', 'images/0d9baf01287057b51e00bb8299a7e6f498abf529debe3d15976263037e9bbbb1.jpg', 'where ratingu,i is the user assigned rating, τ is the threshold to identify if a user provided rating is positive. Evidential reward aggregates the recommended ... | a833561f0ef7484e750c82f53cfc0766535b7e1c1697d5c3a86b2caa0fa0ec11 | 01bc18d9733b34622eff9efd4422fca8f18b069c |
explanation | On tasks where the model already performs well, does C&P fine-tuning lead to a decline in performance? | According to Table 6, InternVL2-2B and InternVL2-8B show minor declines on a few datasets where they originally performed well. We attribute this to the possibility that both cognitive and perceptual responses may occasionally fail simultaneously while maintaining consistency, as illustrated in Figure 4a. However, our ... | ['Table 6', 'Figure 4'] | ['images/bcb639e76fd6dc9fbd9ce9c46b15d6202f6959795d950b3d155df1b78c839daf.jpg', 'images/cc3acc6bad6037c01587df84ec064bd17d72732e9e58af6558ab04c50b980ae2.jpg'] | ['mixed'] | 2 | 3 | 5 | {'Table 2 shows the evaluation results. Overall, closed-source models have higher C&P consistency compared to open-source models. Qwen-VL-Max achieves the highest C&P consistency at 79.98%, followed by GPT-4o at 68.60%. Among the open-source models, Qwen-VL-Chat demonstrates the ': '1', 'Notably, OCR annotations are re... | {'1': 'Table 2 shows the evaluation results. Overall, closed-source models have higher C&P consistency compared to open-source models. Qwen-VL-Max achieves the highest C&P consistency at 79.98%, followed by GPT-4o at 68.60%. Among the open-source models, Qwen-VL-Chat demonstrates the ', '2': 'Notably, OCR annotations a... | {'images/cc3acc6bad6037c01587df84ec064bd17d72732e9e58af6558ab04c50b980ae2.jpg': '4'} | {'4': 'images/cc3acc6bad6037c01587df84ec064bd17d72732e9e58af6558ab04c50b980ae2.jpg'} | {'images/bcb639e76fd6dc9fbd9ce9c46b15d6202f6959795d950b3d155df1b78c839daf.jpg': '6', 'images/2947bf84e1016e8842d69621a930d869f093ca5b459b699f7a73c36a6b6fc8f9.jpg': '4'} | {'6': 'images/bcb639e76fd6dc9fbd9ce9c46b15d6202f6959795d950b3d155df1b78c839daf.jpg', '4': 'images/2947bf84e1016e8842d69621a930d869f093ca5b459b699f7a73c36a6b6fc8f9.jpg'} | {} | ['Table 2 shows the evaluation results. Overall, closed-source models have higher C&P consistency compared to open-source models. Qwen-VL-Max achieves the highest C&P consistency at 79.98%, followed by GPT-4o at 68.60%. Among the open-source models, Qwen-VL-Chat demonstrates the ', 'images/2947bf84e1016e8842d69621a930d... | 20fe7fb2e826aaf4eb4a1389904161ab54b16c90753811caa2ca465b23ab243b | 08af6e3bbee2dba7d63f9faef1d3963bebb02a2c |
explanation | What is the relationship between N, n, and m? | In Table 2, N = m = n, where m and n represent the sample sizes for each of the two distributions being tested. In Figure 5, however, N = m + n, which represents the total sample size received for the experiment. | ['Table 2', 'Figure 5'] | ['images/2950f7b20dfe365b5c28b1ed08c83ceb608b33783477a621623b59347cc6318d.jpg', 'images/692e64130ff3aac0cde8b87c3679d8fedebd43fb50805bb63a333a330a465d76.jpg'] | ['mixed'] | 2 | 3 | 5 | {'Theorem 5.1. (Lopez-Paz & Oquab, 2018b) Let f ′ ∈Cϕ : X →{0, 1} be the SSL-C2ST classifier model. Let H0 : t = 1 and H1 : t = 1 −ϵ(P, Q; f ′), where t is the test accuracy and ϵ(P, Q; f ′) = Pr(zi,li)∼D [f ′(zi) ̸= li] /2 ∈ 0, 21 represents the inability of f ′ to distinguish between P and Q. The test power of tˆ is:... | {'1': 'Theorem 5.1. (Lopez-Paz & Oquab, 2018b) Let f ′ ∈Cϕ : X →{0, 1} be the SSL-C2ST classifier model. Let H0 : t = 1 and H1 : t = 1 −ϵ(P, Q; f ′), where t is the test accuracy and ϵ(P, Q; f ′) = Pr(zi,li)∼D [f ′(zi) ̸= li] /2 ∈ 0, 21 represents the inability of f ′ to distinguish between P and Q. The test power of t... | {'images/692e64130ff3aac0cde8b87c3679d8fedebd43fb50805bb63a333a330a465d76.jpg': '5'} | {'5': 'images/692e64130ff3aac0cde8b87c3679d8fedebd43fb50805bb63a333a330a465d76.jpg'} | {'images/2950f7b20dfe365b5c28b1ed08c83ceb608b33783477a621623b59347cc6318d.jpg': '2'} | {'2': 'images/2950f7b20dfe365b5c28b1ed08c83ceb608b33783477a621623b59347cc6318d.jpg'} | {} | ['Theorem 5.1. (Lopez-Paz & Oquab, 2018b) Let f ′ ∈Cϕ : X →{0, 1} be the SSL-C2ST classifier model. Let H0 : t = 1 and H1 : t = 1 −ϵ(P, Q; f ′), where t is the test accuracy and ϵ(P, Q; f ′) = Pr(zi,li)∼D [f ′(zi) ̸= li] /2 ∈ 0, 21 represents the inability of f ′ to distinguish between P and Q. The test power of tˆ is:... | 57d8aa3edeb2dbd5843784fdc93d50eda986568b887c34b5165e185e1aced37e | 117a7d1efe9b6cebaba614db86e709185420d408 |
explanation | How does the addition of cross-modal data impact the performance of your model? | We have conducted extensive experiments and ablation studies to demonstrate the benefits of adding cross-modal data under the same model structure and token budget. The results are presented in Table 1 and Figure 6. | ['Table 1', 'Figure 6'] | ['images/c33c24982b8e0230e8525a7adde26a5543eaa8420130b670cf13a875e8704f17.jpg', 'images/102bf6a906a8851cfdd1c79ca2ea69ab3c198ac7f4a8e6fd88b5466602dfdeab.jpg'] | ['mixed'] | 2 | 3 | 5 | {'BSM employs a single-nucleotide tokenizer with a vocabulary that includes nucleotides, amino acids, and special tokens. It uses an autoregressive architecture to model biological sequences such as genes and proteins. By learning next-token prediction, the model reasons over sequences causally and captures statistical... | {'1': 'BSM employs a single-nucleotide tokenizer with a vocabulary that includes nucleotides, amino acids, and special tokens. It uses an autoregressive architecture to model biological sequences such as genes and proteins. By learning next-token prediction, the model reasons over sequences causally and captures statis... | {'images/2a282d81483ab9546bf759391883e174ba84c87d74eabaa34c26ce02fb1b988f.jpg': '5', 'images/102bf6a906a8851cfdd1c79ca2ea69ab3c198ac7f4a8e6fd88b5466602dfdeab.jpg': '6', 'images/39752a43862b401b88e5097b689efb2d95d26e7802d6f768274cf3b18def9b76.jpg': '3'} | {'5': 'images/2a282d81483ab9546bf759391883e174ba84c87d74eabaa34c26ce02fb1b988f.jpg', '6': 'images/102bf6a906a8851cfdd1c79ca2ea69ab3c198ac7f4a8e6fd88b5466602dfdeab.jpg', '3': 'images/39752a43862b401b88e5097b689efb2d95d26e7802d6f768274cf3b18def9b76.jpg'} | {'images/c33c24982b8e0230e8525a7adde26a5543eaa8420130b670cf13a875e8704f17.jpg': '1'} | {'1': 'images/c33c24982b8e0230e8525a7adde26a5543eaa8420130b670cf13a875e8704f17.jpg'} | {} | ['images/2a282d81483ab9546bf759391883e174ba84c87d74eabaa34c26ce02fb1b988f.jpg', 'images/39752a43862b401b88e5097b689efb2d95d26e7802d6f768274cf3b18def9b76.jpg', 'BSM employs a single-nucleotide tokenizer with a vocabulary that includes nucleotides, amino acids, and special tokens. It uses an autoregressive architecture t... | f7db81ab5514b0fd0666d31b7dfd586921d981fa3d40fec604ea5fb3d76b12be | 24fd5d6b134b0c6def366de2ca6cae4543e39f62 |
explanation | How does the proposed model handle large deformations in medical images? | Our new draft currently extends Table 1 with a full rigid transformation setting including all 3 transformations: rotation, scaling, and translation. However, we would like to point out that across all settings of Experiment 1, we apply Brownian noise deformation at multiple scales to ensure the synthetic transformatio... | ['Table 1', 'Figure 3'] | ['images/bd332f00b08d46c3fe079807993e810711ef42010efacdfaf9ef76c4f2dfb014.jpg', 'images/416fa5fe6a345128a602cf05f287b6bb8b06c438dbb519062a04f760b6c7a49e.jpg'] | ['mixed'] | 2 | 3 | 5 | {'In this section, we first formally establish the limitations imposed on deformable image registration by the grid constraints of Eulerian frameworks. Afterwards, we establish a Lagrangian formulation that does not make any grid assumptions (section 2.1). Within this context, we highlight the advantages offered by geo... | {'1': 'In this section, we first formally establish the limitations imposed on deformable image registration by the grid constraints of Eulerian frameworks. Afterwards, we establish a Lagrangian formulation that does not make any grid assumptions (section 2.1). Within this context, we highlight the advantages offered b... | {'images/416fa5fe6a345128a602cf05f287b6bb8b06c438dbb519062a04f760b6c7a49e.jpg': '3'} | {'3': 'images/416fa5fe6a345128a602cf05f287b6bb8b06c438dbb519062a04f760b6c7a49e.jpg'} | {'images/bd332f00b08d46c3fe079807993e810711ef42010efacdfaf9ef76c4f2dfb014.jpg': '1', 'images/72348eec94135ab7a03b205acb09c70e2e7df45331db3948baf5b1e8224a4a18.jpg': '2'} | {'1': 'images/bd332f00b08d46c3fe079807993e810711ef42010efacdfaf9ef76c4f2dfb014.jpg', '2': 'images/72348eec94135ab7a03b205acb09c70e2e7df45331db3948baf5b1e8224a4a18.jpg'} | {} | ['images/72348eec94135ab7a03b205acb09c70e2e7df45331db3948baf5b1e8224a4a18.jpg', 'A common necessary preprocessing technique employed to mitigate this issue involves an exhaustive search for an initial affine alignment. This reduces the degrees of freedom in the transformation parameters by guaranteeing that similar fea... | a6b72d9a6bc04b0c1dffad81c4bc17a49f27acd5641db79d0e693ac20938121e | 2e71063092065f2b211c52664560426b1e04c5ef |
explanation | How does the CoTFormer model compare to the standard Transformer in terms of performance? | The accuracy of the standard Transformer in Table 1 can indicate the distance between the CoTFormer and the standard Transformer. Therefore, it is necessary to add the standard Transformer to Figure 2. | ['Table 1', 'Figure 2'] | ['images/bd79fb6eebc5aefb653b4b480e9e9c98751350fed8a08da40c192caa576035f5.jpg', 'images/78d36fde1a32e35714e8df05588902cacc90cd989890adaa24267f1cafab50a9.jpg'] | ['mixed'] | 2 | 3 | 5 | {} | {} | {'images/e3201107214ce7a424118b6fd025aea53e7dfe9577b425b4c9c7787ead0069ae.jpg': '4', 'images/5a7031f87b40f338004cf846e40e18025b7e98d528cdd79cdfebd6680fee6792.jpg': '3', 'images/f2bd897de74379c2b125865a2fdc18f79b9187744fe9c4c27dc0fe5afae18fdc.jpg': '5', 'images/78d36fde1a32e35714e8df05588902cacc90cd989890adaa24267f1cafa... | {'4': 'images/e3201107214ce7a424118b6fd025aea53e7dfe9577b425b4c9c7787ead0069ae.jpg', '3': 'images/5a7031f87b40f338004cf846e40e18025b7e98d528cdd79cdfebd6680fee6792.jpg', '5': 'images/f2bd897de74379c2b125865a2fdc18f79b9187744fe9c4c27dc0fe5afae18fdc.jpg', '2': 'images/78d36fde1a32e35714e8df05588902cacc90cd989890adaa24267f... | {'images/bd79fb6eebc5aefb653b4b480e9e9c98751350fed8a08da40c192caa576035f5.jpg': '1'} | {'1': 'images/bd79fb6eebc5aefb653b4b480e9e9c98751350fed8a08da40c192caa576035f5.jpg'} | {} | ['images/e3201107214ce7a424118b6fd025aea53e7dfe9577b425b4c9c7787ead0069ae.jpg', 'images/5a7031f87b40f338004cf846e40e18025b7e98d528cdd79cdfebd6680fee6792.jpg', 'images/f2bd897de74379c2b125865a2fdc18f79b9187744fe9c4c27dc0fe5afae18fdc.jpg'] | 6998e59fbcab22b1bc6ee609d88666efa0ea344c3bc37844ea2e058426bcfe0c | 3a439959ac98f4b2f52116ae11b370605e09b606 |
explanation | What are the performance differences between the SSF and MSF strategies? | First, we present the SSF and MSF visualization comparison in Figure 2. The SSF has a single change, while the MSF has a variety of changes. Second, in Table 6 we perform ablation experiments of SSF and MSF on segmentation, and we analyze why MSF is more suitable for segmentation. | ['Figure 2', 'Table 6'] | ['images/260264af09a8f3445bbdd80fdeec2b07693b431df57ccf3eae6333d168781a3a.jpg', 'images/5ae993f2b704b12e16b72c4e9ac2a9756bf3c36653746a6f51959b728caed000.jpg'] | ['mixed'] | 2 | 3 | 5 | {'To simulate the distortion and deformation of an object, we have chosen to use the Sine function as our residual function. The inherent periodic nature of the Sine function allows us to adjust the number of regions that are deformed with precision. Additionally, by manipulating the amplitude of the Sine function, we ... | {'1': 'To simulate the distortion and deformation of an object, we have chosen to use the Sine function as our residual function. The inherent periodic nature of the Sine function allows us to adjust the number of regions that are deformed with precision. Additionally, by manipulating the amplitude of the Sine function... | {'images/01b46f660d690ae0f356567e49caf8b198e9bc41fea3c545d5a72c54bbc6bcd6.jpg': '4', 'images/260264af09a8f3445bbdd80fdeec2b07693b431df57ccf3eae6333d168781a3a.jpg': '2'} | {'4': 'images/01b46f660d690ae0f356567e49caf8b198e9bc41fea3c545d5a72c54bbc6bcd6.jpg', '2': 'images/260264af09a8f3445bbdd80fdeec2b07693b431df57ccf3eae6333d168781a3a.jpg'} | {'images/5ae993f2b704b12e16b72c4e9ac2a9756bf3c36653746a6f51959b728caed000.jpg': '6', 'images/8cc72a64880c6c21759991d0f88f1ec620fd16727c49450e5f7a67b51eb99754.jpg': '5'} | {'6': 'images/5ae993f2b704b12e16b72c4e9ac2a9756bf3c36653746a6f51959b728caed000.jpg', '5': 'images/8cc72a64880c6c21759991d0f88f1ec620fd16727c49450e5f7a67b51eb99754.jpg'} | {} | ['images/8cc72a64880c6c21759991d0f88f1ec620fd16727c49450e5f7a67b51eb99754.jpg', 'To simulate the distortion and deformation of an object, we have chosen to use the Sine function as our residual function. The inherent periodic nature of the Sine function allows us to adjust the number of regions that are deformed with p... | 915d3f9f1702d60c5e98d2340e38873dd76632da2b5d1e3e1d7a9dfb85c2f5fc | 3b7721717f4d4bb039675982f8604ef8379258d5 |
explanation | How does the GSA-R2R dataset address the diversity of real-world environments? | We have made significant efforts to expand the diversity of GSA-R2R to include 20 distinct scene types, compared to just six in R2R. This diversity covers a wide range of daily scenarios and exceeds that of existing embodied navigation datasets, as highlighted in Table 1 of our paper. We already include multiple commer... | ['Table 1', 'Figure 2'] | ['images/52bf352cbd52ddb91e50272965f8dfd54170eea96c743cb3adf62eba877558ce.jpg', 'images/6178eda5ffcafe2b6b73084fd1941e5d713fc12a75f8318278a58a5aedf8cf64.jpg'] | ['mixed'] | 2 | 3 | 5 | {} | {} | {'images/1a425bbe2763a3120894c3389ccec7ee600b5454cb5de1118f2041dea2aabfeb.jpg': '1', 'images/34842d927b45d096db0f8485a57e2098bd1596a0bd7265f2e4fd1f7720206aaa.jpg': '4', 'images/6178eda5ffcafe2b6b73084fd1941e5d713fc12a75f8318278a58a5aedf8cf64.jpg': '2'} | {'1': 'images/1a425bbe2763a3120894c3389ccec7ee600b5454cb5de1118f2041dea2aabfeb.jpg', '4': 'images/34842d927b45d096db0f8485a57e2098bd1596a0bd7265f2e4fd1f7720206aaa.jpg', '2': 'images/6178eda5ffcafe2b6b73084fd1941e5d713fc12a75f8318278a58a5aedf8cf64.jpg'} | {'images/52bf352cbd52ddb91e50272965f8dfd54170eea96c743cb3adf62eba877558ce.jpg': '1', 'images/d88d1a88a655df45e8d41933a6a1b701c4ac4d7240316f9d40b38df2b625399c.jpg': '4'} | {'1': 'images/52bf352cbd52ddb91e50272965f8dfd54170eea96c743cb3adf62eba877558ce.jpg', '4': 'images/d88d1a88a655df45e8d41933a6a1b701c4ac4d7240316f9d40b38df2b625399c.jpg'} | {} | ['images/1a425bbe2763a3120894c3389ccec7ee600b5454cb5de1118f2041dea2aabfeb.jpg', 'images/d88d1a88a655df45e8d41933a6a1b701c4ac4d7240316f9d40b38df2b625399c.jpg', 'images/34842d927b45d096db0f8485a57e2098bd1596a0bd7265f2e4fd1f7720206aaa.jpg'] | 79321511912b2964f578557dbd5b0e3962b310f5fe14ce7b8b3ecb7cee6bd556 | 466366db3c29af46db9db97a71f1c21c2940ea95 |
explanation | What is the exact computational time/cost for the proposed method compared to existing MetaBBO methods? | We have demonstrated in the experiments (Figure 3, zero-shot performance) that the trained NeurELA can be seamlessly integrated into existing MetaBBO methods to provide effective dynamic landscape analysis, without further re-training. We also provide the inference wall time comparison in Table 1 to compare the computa... | ['Figure 3', 'Table 1'] | ['images/404469c60be80871de0a0cac273007fcc1f18dfb0d7cdc107fc8c79a31f770b5.jpg', 'images/2860d185ad0551afe2aabb501992df2d0b7f46bca5e5e298a5678d318f671126.jpg'] | ['mixed'] | 2 | 3 | 5 | {'PIE. PIE normalizes observation ot using two min-max normalization operations: first on the candidate solutions {Xit}im=1 against the search range, and second on the objective values {yit}im=1 using the extremum values at time step t. This ensures unified representation and generalization by scaling all values to [0,... | {'1': 'PIE. PIE normalizes observation ot using two min-max normalization operations: first on the candidate solutions {Xit}im=1 against the search range, and second on the objective values {yit}im=1 using the extremum values at time step t. This ensures unified representation and generalization by scaling all values t... | {'images/404469c60be80871de0a0cac273007fcc1f18dfb0d7cdc107fc8c79a31f770b5.jpg': '3', 'images/1ea8a4c9f98bd3c072369dd6b23ed6a0b0386c676b238e2f01bc9429d0b2366e.jpg': '2'} | {'3': 'images/404469c60be80871de0a0cac273007fcc1f18dfb0d7cdc107fc8c79a31f770b5.jpg', '2': 'images/1ea8a4c9f98bd3c072369dd6b23ed6a0b0386c676b238e2f01bc9429d0b2366e.jpg'} | {'images/2860d185ad0551afe2aabb501992df2d0b7f46bca5e5e298a5678d318f671126.jpg': '1'} | {'1': 'images/2860d185ad0551afe2aabb501992df2d0b7f46bca5e5e298a5678d318f671126.jpg'} | {} | ['Model Complexity (RQ6). We discuss the relationship between the model complexity and the zero-shot performance (unseen MetaBBO algorithm & problem sets) of our NeurELA. Concretely, We pre-train NeurELA under 6 different model complexities, with various hidden dimensions, i.e., h = (16, 64), and the number of the Ts-A... | 33fa994b4d9460e0a41f23d63db78bbfe1e1a6b0222ca6e28c6ce212fffeef2c | 52338e0fa95ec6a5e01a939a36c8daed3211c494 |
MCiteBench is a benchmark for evaluating the ability of Multimodal Large Language Models (MLLMs) to generate text with citations in multimodal contexts.
Please download `MCiteBench_full_dataset.zip`; it contains the `data.jsonl` file and the `visual_resources` folder.
The data format for `data_example.jsonl` and `data.jsonl` is as follows:
question_type: [str] # The type of question, with possible values: "explanation" or "locating"
question: [str] # The text of the question
answer: [str] # The answer to the question, which can be a string, list, float, or integer, depending on the context
evidence_keys: [list] # A list of abstract references or identifiers for evidence, such as "section x", "line y", "figure z", or "table k".
# These are not the actual content but pointers or descriptions indicating where the evidence can be found.
# Example: ["section 2.1", "line 45", "Figure 3"]
evidence_contents: [list] # A list of resolved or actual evidence content corresponding to the `evidence_keys`.
# These can include text excerpts, image file paths, or table file paths that provide the actual evidence for the answer.
# Each item in this list corresponds directly to the same-index item in `evidence_keys`.
# Example: ["This is the content of section 2.1.", "/path/to/figure_3.jpg"]
evidence_modal: [str] # The modality of the evidence, with possible values: ['figure', 'table', 'text', 'mixed']
evidence_count: [int] # The total number of evidence items related to the question
distractor_count: [int] # The total number of distractor items, i.e., information blocks that are irrelevant or misleading with respect to the answer
info_count: [int] # The total number of information blocks in the document, including text, tables, images, etc.
text_2_idx: [dict[str, str]] # A dictionary mapping text information to corresponding indices
idx_2_text: [dict[str, str]] # A reverse dictionary mapping indices back to the corresponding text content
image_2_idx: [dict[str, str]] # A dictionary mapping image paths to corresponding indices
idx_2_image: [dict[str, str]] # A reverse dictionary mapping indices back to image paths
table_2_idx: [dict[str, str]] # A dictionary mapping table paths to corresponding indices
idx_2_table: [dict[str, str]] # A reverse dictionary mapping indices back to table paths
meta_data: [dict] # Additional metadata used during the construction of the data
distractor_contents: [list] # Similar to `evidence_contents`, but contains distractors, which are irrelevant or misleading information
question_id: [str] # The ID of the question
pdf_id: [str] # The ID of the associated PDF document
If you find MCiteBench useful for your research or applications, please cite it using the following BibTeX:
@article{hu2025mcitebench,
title={MCiteBench: A Benchmark for Multimodal Citation Text Generation in MLLMs},
author={Hu, Caiyu and Zhang, Yikai and Zhu, Tinghui and Ye, Yiwei and Xiao, Yanghua},
journal={arXiv preprint arXiv:2503.02589},
year={2025}
}