
Visual perception of moving objects is integral to our day-to-day life, integrating visual spatial and temporal perception. Most research has focused on identifying the brain regions activated during motion perception. However, an empirically validated general mathematical model is required to understand how motion perception is modulated. Here, we develop a mathematical formulation of the modulation of the perception of a moving object due to a change in speed, under the principle of the invariance of causality.
We formulated the perception of a moving object as a coordinate transformation from retinotopic space onto perceptual space and derived a quantitative relationship between the spatiotemporal coordinates. To validate our model, we analyzed two experiments: (i) the perceived length of a moving arc, and (ii) the perceived time while observing moving stimuli. We also performed a magnetic resonance imaging (MRI) tractography investigation of subjects to identify the anatomical correlate of the modulation of the perception of moving objects.
Our theoretical model shows that the interaction between visual-spatial and temporal perception during the perception of a moving object is described by coupled linear equations, and experimental observations validate the model. We observed that cerebral area V5 may be an anatomical correlate for this interaction. The physiological basis of the interaction is shown by a Lotka-Volterra system delineating the interplay between the neurotransmitters acetylcholine and dopamine, whose concentrations vary periodically with an orthogonal phase shift between them, occurring at the axodendritic synapses of complex cells in area V5.
Under the invariance of causality in the representation of events in retinotopic space and perceptual space, speed modulates the perception of a moving object. This modulation may be due to variations in the tuning properties of complex cells in area V5 arising from the dynamic interaction between acetylcholine and dopamine. To our knowledge, our analysis is the first study to establish a mathematical linkage between motion perception and causality invariance.
Citation: Pratik Purohit, Prasun K. Roy. Interaction between spatial perception and temporal perception enables preservation of cause-effect relationship: Visual psychophysics and neuronal dynamics[J]. Mathematical Biosciences and Engineering, 2023, 20(5): 9101-9134. doi: 10.3934/mbe.2023400
As per the report of the World Health Organization (WHO), approximately 10 million people died of cancer globally in 2020 [1],[2]. Among new cases, breast cancer accounted for 2.26 million, lung cancer for 2.21 million, rectum and colon cancer for 1.93 million, skin cancer for 1.20 million, and stomach cancer for 1.09 million [2]. In India, nearly 2.7 million people are living with cancer, and every year approximately 1.39 million (13.9 lakh) new cancer patients are identified [2]. Overall, according to cancer statistics for India (2020), about 0.85 million (8.5 lakh) deaths are caused by cancer. Figure 1 shows the distribution of cancer-related deaths in the male and female populations for the year 2021. Among the various cancer types, lung and bronchus cancers account for a significant proportion of cases. These data underscore the critical impact of cancer on public health, highlighting the urgency of targeted interventions and research efforts.
The incidence and complexity of cancer pose a serious threat to global health, necessitating novel strategies for earlier identification and better management. A new era in healthcare has begun with the introduction of AI, which has shown promising results in cancer detection, diagnosis, and therapy. This review explores the domains in which AI may be helpful for the identification of cancer, emphasizing its benefits for early diagnosis, precise treatment, and patient-centered care. A key element determining treatment success and patient survival is early cancer identification. Traditional diagnostic techniques frequently rely on the subjective interpretation of pathology and radiology images by individual readers. AI systems, however, have shown impressive results in improving the diagnostic procedure. Deep learning (DL) methods allow AI models to precisely recognize subtle irregularities and patterns in medical images, aiding early detection and lowering false negatives [3].
These advancements have the potential to impact cancer management and patient prognosis significantly. Additionally, AI-driven risk prediction models have emerged as valuable tools for identifying individuals at high risk of developing specific cancers. Through the analysis of diverse datasets that encompass genetic, lifestyle, and clinical information, AI algorithms can stratify individuals based on their susceptibility to certain malignancies. This stratification enables the tailoring of personalized screening protocols and preventive interventions, contributing to improved early detection rates [4],[5]. Precision medicine, another paradigm-shifting concept, relies on understanding individual patients' unique genetic and molecular profiles to guide treatment decisions. AI's capability to process large-scale genomic datasets has fueled the discovery of novel biomarkers and genetic mutations associated with cancer susceptibility and progression [6],[7]. These insights empower oncologists to design personalized treatment regimens, thereby increasing the likelihood of therapeutic success and minimizing adverse effects. In addition to diagnostics and precision medicine, AI technologies have revolutionized treatment planning and monitoring. Real-time analysis of patient data, coupled with machine learning (ML) algorithms, empowers healthcare professionals to dynamically adjust treatment strategies based on evolving patient responses [8],[9]. Such capabilities promote a patient-centric approach to cancer care, optimizing treatment efficacy and enhancing the quality of life for individuals undergoing therapy.

However, the integration of AI into cancer detection is not devoid of challenges. Data privacy concerns, the need for robust validation, and potential biases in algorithmic decision-making warrant careful consideration [10],[11]. Collaboration between clinicians, data scientists, and regulatory bodies is essential to ensure the responsible and ethical development of AI technologies in oncology [12],[13].
This review article comprehensively explores the multifaceted applications of AI in cancer detection, encompassing radiology, pathology, genomics, and clinical decision support systems. By examining the current state of the field, addressing challenges, and envisioning future directions, the study aims to elucidate the transformative impact of AI on reshaping the landscape of cancer detection and care, ultimately contributing to improved patient outcomes.
The role of AI in cancer detection is becoming increasingly significant due to its potential to enhance accuracy and efficiency in risk assessment and early diagnosis [2],[5]–[8]. AI tools, particularly ML and DL, are revolutionizing the field of oncology by assisting medical professionals in identifying cancerous tissues and anomalies with improved precision [3]. AI algorithms, particularly those operating on spatial imaging data, leverage data from various cancer diagnostic techniques such as magnetic resonance imaging (MRI), computed tomography (CT) scans, and blood tests, enabling quicker and more accurate cancer diagnoses compared to traditional methods. Beyond diagnosis, AI is employed for treatment planning and patient monitoring, thereby contributing to improved patient outcomes. In the realm of cancer detection, ML, a subfield of AI, focuses on using data and algorithms to learn and make predictions with minimal human involvement. This capability finds applications in various domains, including medical diagnostics, speech recognition, email screening, and more. ML algorithms such as random forest, K-nearest neighbors (KNN), and support vector machine (SVM) can expedite and enhance cancer identification. For example, the random forest method has demonstrated its ability to identify early-stage breast cancer by utilizing imaging data effectively [6]. As AI continues to evolve, it holds the promise of further transforming cancer detection and diagnosis, optimizing the utilization of medical data to provide timely and accurate insights that benefit both patients and medical professionals [14],[15]. Figure 2 illustrates the sequential steps required to prepare a predictive ML model. These steps are crucial to ensure optimal model performance and accurate generalization of predictions to new data.
The initial step involves precisely articulating the problem that ML is going to address. Understanding the core business or research objective, and the specific predictions or classifications required, is essential.
Data serves as the foundation of any ML model. Relevant data should be collected to address the identified problem. The data must be clean, organized, and reflective of real-world scenarios. Data preprocessing tasks, including cleaning, normalization, and feature extraction, may be necessary.
Dividing the dataset into distinct subsets is crucial. The data is typically split into training, validation, and test sets: the training set is used to fit the model, the validation set assists in fine-tuning hyperparameters, and the test set evaluates the final model's performance on previously unseen data.
Selecting an appropriate ML algorithm depends on the problem type (classification, regression, clustering, etc.) and the characteristics of the dataset. Common algorithms such as decision trees, random forests, support vector machines, and neural networks can be considered.
Once the algorithm is chosen, it is time to implement it using a suitable ML library like scikit-learn, TensorFlow, or PyTorch. This involves configuring hyperparameters that control the learning process and training the model on the training data.
Validation and tuning are essential steps to refine the model's performance. Minor adjustments to hyperparameters can be made using the validation set. This process, known as hyperparameter tuning, helps determine the optimal configuration of settings to enhance the model's efficacy.
The steps outlined above provide a comprehensive framework for preparing a cancer detection model using ML. By following these steps, one can systematically approach the development of a predictive model that aims to detect cancer with accuracy and reliability. From defining the problem and data collection to selecting an appropriate algorithm and fine-tuning the model, each step contributes to creating a robust and effective cancer detection solution.
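As a concrete illustration of this workflow, the following is a minimal scikit-learn sketch; the synthetic dataset, feature counts, and hyperparameter grid are placeholders, not a clinically validated detection model.

```python
# Minimal sketch of the ML workflow described above (scikit-learn).
# The synthetic dataset and hyperparameter grid are illustrative placeholders,
# not a validated cancer-detection model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import roc_auc_score

# 1) Problem definition: binary classification (malignant vs. benign).
# 2) Data collection/preprocessing: here, a synthetic stand-in for real features.
X, y = make_classification(n_samples=1000, n_features=30, n_informative=10,
                           random_state=0)

# 3) Split into training data and a held-out test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# 4-6) Choose an algorithm, then tune hyperparameters with cross-validation
# (the cross-validation folds play the role of the validation set).
param_grid = {"n_estimators": [100, 300], "max_depth": [None, 5, 10]}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, scoring="roc_auc", cv=5)
search.fit(X_train, y_train)

# Final evaluation on previously unseen data.
test_auc = roc_auc_score(y_test, search.predict_proba(X_test)[:, 1])
print(f"Best params: {search.best_params_}, test AUC: {test_auc:.3f}")
```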
One common approach is using an ML algorithm to identify patterns indicative of cancer. For example, convolutional neural networks (CNNs) can analyze medical images such as X-rays and MRI scans, while decision trees or SVMs can process genetic data. These algorithms can help doctors make more accurate diagnoses [14],[16]. Random forest is an effective algorithm for cancer detection because it can capture complex relationships in the data, handle noise, and reduce the risk of overfitting [15],[16]. Parameter tuning and feature selection are crucial in optimizing the algorithm's performance for specific datasets and cancer types.
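For instance, a small convolutional network for classifying image patches might look like the following PyTorch sketch; the architecture, patch size, and toy data are illustrative assumptions rather than a published model.

```python
# Minimal sketch of a CNN that classifies small grayscale image patches
# (e.g., candidate regions from an X-ray or MRI slice) as suspicious or not.
# The architecture, patch size, and random data are illustrative assumptions.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # for 64x64 input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = PatchCNN()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on a random batch of 64x64 patches.
images = torch.randn(8, 1, 64, 64)   # stand-in for preprocessed image patches
labels = torch.randint(0, 2, (8,))   # stand-in for expert labels
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"training loss on toy batch: {loss.item():.3f}")
```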
DL, as opposed to classical ML, processes enormous volumes of unstructured data using multi-layered structures known as neural networks (NNs). Deep learning is a subfield of ML that employs complex algorithms and deep NNs to train a model. DL applications include image colorization, self-driving automobiles, and robotics. Deep learning assists in making therapeutic decisions and greatly improves the precision with which malignant tumors in the human body are detected. A generative adversarial network (GAN) is a deep learning model used to improve breast cancer identification by generating synthetic mammographic images for screening purposes; this helps address the limitations of data scarcity and improves the robustness of detection algorithms [14]–[16]. For example, Rezaei et al. developed a hierarchical GAN method with an ensemble CNN for accurate nodule detection in lung cancer diagnosis, with a 30% improvement in detection rate [17]. Similarly, Alruily et al. introduced a hybrid approach, shown in Figure 3, for the augmentation and segmentation of breast ultrasound images using a GAN with identity blocks and a modified U-Net 3+, showing efficient results in both the augmentation and segmentation steps [14]–[16],[18].
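The underlying GAN mechanism can be sketched in a few lines: a generator maps random noise to synthetic images while a discriminator learns to distinguish real from generated samples. The toy architectures and shapes below are illustrative only and do not reproduce the cited methods.

```python
# Toy sketch of the GAN idea used for data augmentation: a generator produces
# synthetic images while a discriminator learns to tell real from fake.
# Shapes, architectures, and data are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim, img_pixels = 64, 32 * 32

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_pixels), nn.Tanh(),   # fake 32x32 image (flattened)
)
discriminator = nn.Sequential(
    nn.Linear(img_pixels, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                        # real/fake logit
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(16, img_pixels) * 2 - 1   # stand-in for real scans
noise = torch.randn(16, latent_dim)

# Discriminator step: real images labelled 1, generated images labelled 0.
fake_images = generator(noise).detach()
d_loss = bce(discriminator(real_images), torch.ones(16, 1)) + \
         bce(discriminator(fake_images), torch.zeros(16, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to make the discriminator label fakes as real.
g_loss = bce(discriminator(generator(noise)), torch.ones(16, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(f"d_loss={d_loss.item():.3f}, g_loss={g_loss.item():.3f}")
```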
The conventional use of technology in diagnosis includes X-rays, MRI, CT scans, positron emission tomography (PET) scans, ultrasounds, and biopsies that are then subjected to microscopic examination. Techniques such as cryo-electron microscopy, the Infinium assay, robotic surgery, CRISPR (clustered regularly interspaced short palindromic repeats), cryoablation, and radiofrequency ablation are contemporary tools used for treatment [14]–[16],[19]. Table 1 lists the medical devices equipped with AI technology that have been approved by the US Food and Drug Administration (FDA) for use in cancer radiology-related applications.
Serial number | Year of approval | Name of the device | Description of the device and its role |
1. | 2015 | ClearRead CT (Riverain Technologies LLC.) | Providing support for reviewing chest multi-slice CT scans and identifying potential nodules that require a radiologist's attention. |
2. | Transpara (ScreenPoint Medical BV) | Aiding physicians in the interpretation of screening mammograms, aiding in the identification of suspicious areas indicative of breast cancer. | |
3. | 2016 | SmartTarget (SmartTarget Ltd.) | Participating in image-guided intervention and diagnostic procedures related to the prostate gland. |
4. | LungQ (Thirona Corp.) | Aiding in diagnosing and documenting abnormalities in pulmonary tissue images, specifically extracted from CT thoracic datasets. | |
5. | 2017 | AmCAD-US (AmCad BioMed Corporation) | A software designed to visualize and quantify ultrasound image data along with corresponding backscattered signals. |
6. | QuantX (Quantitative Insights) | An AI-enhanced diagnostic system designed to assist in achieving accurate diagnoses of breast cancer. | |
7. | Veye Chest (Aidence BV) | Assistance in the detection of pulmonary nodules from CT scans. | |
8. | 2018 | Arterys Oncology DL (Arterys) | An AI-powered, cloud-based medical imaging software designed to automatically measure and track lesions and nodules in both MRI and CT scans. |
9. | QVCAD (QView Medical Inc.) | An assistance tool aimed at detecting mammography-occult lesions in areas that were not initially identified as having suspicious findings. | |
10. | HealthMammo (Zebra Medical Vision Inc.) | Processing and analyzing mammograms to identify suspected lesions indicative of breast cancer. | |
11. | Arterys Oncology DL (Arterys Inc.) | Assisting in the oncological workflow by aiding users in confirming the presence or absence of lesions. This application supports anatomical datasets such as CT or MRI scans. | |
12. | AmCAD-UT (AmCad BioMed Corporation) | Providing support in the analysis of thyroid ultrasound images. | |
13. | Mia - Mammography Intelligent Assessment (Kheiron Medical Technologies Ltd.) | Offering assistance in the detection of breast cancer through the analysis of mammograms. | |
14. | Arterys MICA (Arterys) | A platform powered by AI for the analysis of medical images, including MRI and CT scans. | |
15. | SubtlePET (Subtle Medical) | An AI-driven technology that enables medical centers to provide quicker and safer patient scanning experiences, simultaneously improving exam throughput and provider profitability. | |
16. | 2019 | cmTriage (CureMetrix) | A software utilizing AI for the triage of mammography cases.
17. | Deep Learning Image Reconstruction (GE Medical Systems) | ||
18. | Auto Lung Nodule Detection (Samsung Electronics Co. Ltd. (parent company: Samsung Group)) | Detection of lung nodules on chest images for diagnostic support. | |
19. | JPC-01K (JLK Inspection Inc.) | Offering diagnostic support through the detection of prostate cancer using MRI. | |
20. | syngo.Breast Care (Siemens Healthcare GmbH (parent company: Siemens AG)) | Providing interpretation and reporting services to offer diagnostic support using mammograms. | |
21. | Aquilion ONE (TSX-305A/6) V8.9 with AiCE (Canon Medical Systems Corporation) | A device capable of capturing and displaying cross-sectional volumes of the entire body, including the head, with the unique ability to image whole organs within a single rotation. | |
22. | ProFound AI for Digital Breast Tomosynthesis (iCAD Inc.) | A software device for computer-assisted detection and diagnosis (CAD) designed to aid in the interpretation of digital breast tomosynthesis (DBT) exams. | |
23. | RayCare 2.3 (RaySearch Laboratories) | An oncology information system is utilized to facilitate workflows, scheduling, and the management of clinical information for oncology care and post-treatment monitoring. | |
24. | Breast-SlimView (Hera-MI SAS) | Providing diagnostic support by detecting breast cancer through the analysis of mammograms. | |
25. | Vara (Merantix Healthcare GmbH) | Assistance in breast cancer screening and triage through the analysis of mammograms. | |
26. | ProFound AI Software V2.1 (iCAD) | A computer-assisted detection (CAD) software device developed to be used concurrently by interpreting physicians during the assessment of DBT images. | |
27. | 2019 | Transpara (ScreenPoint Medical) | A device designed to assist physicians concurrently while interpreting screening mammograms from compatible Full Field Digital Mammography (FFDM) systems. Its purpose is to help identify regions that appear suspicious for breast cancer and evaluate the likelihood of malignancy. |
28. | QyScore software (Qynapse SAS) | Automating the process of labeling, visualizing, and quantifying the volumes of segmentable brain structures and lesions from MRI images. | |
29. | 2020 | JBD-01K (JLK Inspection Inc.) | Providing diagnostic support through the detection of breast cancer using mammograms. |
30. | InferRead CT Lung (Beijing Infervision Technology Co. Ltd.) | A tool designed for lung cancer screening and management through the analysis of CT scans. | |
31. | b-box (X-rays GmbH) | Evaluating the quality of mammography images and determining breast density using mammograms. | |
32. | densitasAI (Densitas Inc.) | Offering support for the assessment of breast density using mammograms. | |
33. | Broncholab (Fluidda Inc) | Aiding in diagnosing and documenting abnormalities in pulmonary tissue images obtained from CT thoracic datasets. | |
34. | Syngo.CT Lung CAD (Siemens Medical Solutions Inc. (parent company: Siemens AG)) | Aiding in the detection of solid pulmonary nodules while reviewing multi-detector computed tomography (CT) exams of the chest. | |
35. | Genius AI Detection (Hologic, Inc.) | A software device designed to detect potential abnormalities in breast tomosynthesis images. | |
36. | MammoScreen (Therapixel SA) | Assisting in the identification of findings on screening FFDM acquired with compatible mammography systems and evaluating the level of suspicion associated with them. | |
37. | Visage Breast Density (Visage Imaging) | The software application is designed to be utilized alongside compatible full-field digital mammography systems, supporting radiologists in evaluating breast tissue composition. | |
38. | Imagio Breast Imaging System (Seno Medical Instruments, Inc.) | Enables an enhanced classification of breast masses in comparison to using ultrasound alone, incorporating AI-based software | |
39. | 2021 | Vivo Software Application (DiA Imaging Analysis Ltd.) | It provides an objective automated AI-based ejection fraction analysis. |
40. | Vantage Galan 3T, MRT-3020, V6.0 with AiCE Reconstruction Processing Unit for MR (Canon Medical Systems Corporation) | Advanced intelligent Clear-IQ Engine (AiCE) deep learning reconstruction is used for the reconstruction of MR images. | |
41. | GI Genius (Cosmo Artificial Intelligence - AI Ltd.) | It helps physicians detect colorectal polyps of various sizes, shapes, and morphologies. | |
42. | Chest-CAD (Imagen Technologies, Inc.) | An AI tool that helps identify findings on chest images that clinicians commonly miss or misinterpret (reported miss or misinterpretation rates of around 47%). | |
43. | 2022 | Precise Image (Philips Medical Systems Nederland, B.V.) | AI-powered reconstruction algorithm designed for the low radiation dose, which helps to improve the image appearance that closely resembles filtered back projection (FBP) at a higher dose. |
44. | Contour ProtegeAI (MIM Software Inc.) | It uses machine learning algorithms for the processing of CT images. | |
45. | Deep Learning Image Reconstruction (GE Medical Systems) | It uses a deep learning image reconstruction algorithm trained to eliminate image noise by leveraging raw MRI data. | |
46. | Ingenia, Ingenia CX, Ingenia Elition, Ingenia Ambition, MR 5300 and MR 7700 MR Systems (Philips Medical Systems Nederland B.V.) | It enables physicians to obtain cross-sectional and spectroscopic images. | |
47. | Brainomix 360 Triage ICH (Brainomix Limited) | It is a notification tool that provides real-time alerts to clinicians. |
Mainly, six imaging modalities are available for imaging human cancer: X-ray, PET, optical imaging, single-photon emission computed tomography (SPECT), ultrasound (US), and MRI. Table 2 summarizes these and related methods, which play an important role in localizing the organ or tissue where cancer cells are located. The use of AI has emerged as a transformative force in radiology and medical imaging, reshaping how medical professionals interpret and analyze complex imaging data. AI-driven algorithms offer the potential to enhance accuracy, efficiency, and diagnostic capabilities across a range of imaging modalities. In diagnostic radiology, AI aids in detecting and characterizing anomalies, from subtle lesions to intricate patterns, by leveraging DL techniques that learn from vast datasets. For instance, AI-powered systems have demonstrated remarkable performance in detecting abnormalities in chest X-rays, aiding in the early diagnosis of conditions like pneumonia and lung cancer. Recently, Google Health developed a deep learning model that can distinguish between normal and abnormal chest radiographs across multiple datasets, including tuberculosis and COVID-19 cases [21]. Additionally, AI assists in the triage and prioritization of cases, optimizing workflow and improving patient care [22]–[25].

In medical imaging, AI holds potential in advanced modalities such as MRI and CT [26]. Zou et al. developed a new framework for dynamic reconstruction of MRI images; the framework splits the under-sampled k-space measurements into two sub-datasets and uses them as inputs to two neural networks that share the same structure but have different weights [27]. Similarly, Artesani et al. explored the use of AI to revolutionize PET imaging, focusing on enhancing image quality, denoising, attenuation map generation, and quantification using deep learning techniques; applications include cancer diagnosis and therapy, neurology, and cardiology [28]. AI algorithms are employed to enhance image quality, reduce artifacts, and expedite image reconstruction. Moreover, AI-driven image segmentation aids in the precise delineation of anatomical structures and assists in treatment planning, particularly in radiation therapy and surgical interventions.

While the integration of AI in radiology offers transformative benefits, it also raises considerations related to algorithm robustness, clinical validation, and ethical implications, necessitating close collaboration between radiologists, data scientists, and regulatory bodies to harness AI's potential effectively [29],[30]. AI integration into radiology and medical imaging has the potential to revolutionize clinical practice, enhancing diagnostic accuracy, optimizing workflows, and ultimately improving patient outcomes. However, continued research, validation, and responsible implementation remain essential for unlocking AI's full potential in this field.
Imaging method | Application | Advantages | Disadvantages | References |
X-ray | X-rays are commonly used to detect tumors, bone abnormalities, and other abnormalities in the body. They are often used in combination with other imaging techniques to provide a more comprehensive view. | Fast, inexpensive, and widely available. | Uses ionizing radiation. | Malik et al., 2023 and Miwa et al., 2017 [31],[32]
Well suited for visualizing bone and detecting gross structural abnormalities. | Limited soft tissue contrast compared to CT or MRI. |||
Often the first-line examination, used in combination with other modalities. | Two-dimensional projection images can obscure overlapping structures. |||
MRI | MRI uses a strong magnetic field and radio waves to produce images of the body's internal structure. MRI is valuable for imaging soft tissues and can provide information about the extent and location of tumors. | Excellent soft tissue contrast, allowing for detailed anatomical visualization. | Relatively long imaging times can be challenging for some patients. | Aisen et al., 1986 and Siegel, 2001 [33],[34] |
No ionizing radiation, making it safer for repeated imaging. | Expensive equipment and higher operational costs. | |||
Can provide functional information through techniques like diffusion-weighted imaging (DWI) and dynamic contrast-enhanced MRI (DCE-MRI). | Limited availability in some regions. | |||
Suitable for imaging various parts of the body. | ||||
Ultrasound | Ultrasound uses high-frequency sound waves to create images of internal structures. It is commonly used to assess the size and characteristics of tumors, guide biopsies, and monitor treatment responses. | Real-time imaging with no ionizing radiation exposure. | Limited penetration through bone and air-filled structures. | Fischerova et al., 2011 [35] |
Non-invasive and widely available. | Limited ability to visualize soft tissues in deep body regions. | |||
Relatively low cost compared to other imaging modalities. | Operator-dependent and potential for variability in image quality. | |||
Suitable for guiding biopsies and minimally invasive procedures. | ||||
PET | PET scans involve injecting a small amount of radioactive material into the body, which accumulates in areas with high metabolic activity (such as cancer cells). The PET scanner detects the radiation emitted by the material and produces images that highlight these active areas. A PET scan is often combined with a CT scan (PET/CT) for more accurate localization. | High sensitivity and specificity for detecting metabolic changes in cancer cells. | Limited spatial resolution compared to other imaging modalities. | Czernin et al., 2002 [36] |
Provides quantitative data on tracer uptake, aiding in treatment response assessment. | Requires the use of a cyclotron for on-site production of short-lived radionuclides. | |||
Can be combined with computed tomography (PET/CT) for precise anatomical localization. | Radiation exposure to patients and healthcare professionals due to the use of radionuclides. | |||
Widely used for staging, restaging, and monitoring therapy response. | ||||
SPECT | SPECT is a nuclear medicine imaging technique that provides three-dimensional images of the distribution of radioactive substances in the patient's bloodstream. The radiotracer emits gamma rays, which are detected by a gamma camera as the patient is positioned within the SPECT scanner. SPECT is particularly useful for imaging internal organs and tissues, and it has applications in cancer detection and staging. | Useful for functional imaging of specific organs and tissues. | Lower spatial resolution compared to PET or CT. | Keown et al., 2020 [37] |
Provides valuable information on perfusion, blood flow, and receptor expression. | Longer imaging acquisition times compared to PET. | |||
Can be used with a variety of radiopharmaceuticals for different applications. | Limited sensitivity in detecting low-level tracer uptake. | |||
Sentinel lymph node mapping | In cancer staging, sentinel lymph nodes (the first lymph nodes to receive drainage from a tumor) are crucial indicators of cancer spread. Radioactive tracers are injected near the tumor, and nuclear imaging helps identify and biopsy these nodes, aiding in accurate staging. | Minimally invasive technique for identifying sentinel lymph nodes. | May result in false negatives due to the possibility of missing metastatic nodes. | Manca et al., 2016 and Petousis et al., 2022 [38],[39] |
Helps avoid unnecessary lymph node dissection in certain cancers. | In certain cases, sentinel nodes may not accurately represent the overall lymph node status. | |||
Accurate staging of cancer spread through lymphatic pathways. | ||||
Mammography | Mammography is a widely used technique for breast cancer detection and screening. It involves X-ray imaging of the breast tissue to identify abnormalities such as masses or microcalcifications that may indicate the presence of cancer. | Effective for detecting breast cancer at early stages, especially in older women. | Limited sensitivity in dense breast tissue, especially in younger women. | Pisano et al., 2006 [40],[41] |
Wide availability and established screening programs. | Potential discomfort during compression for some patients. | |||
Relatively low radiation exposure. | May result in false positives that require additional testing and anxiety. | |||
Can detect small calcifications associated with early breast cancers. | ||||
Thermography | Thermography detects changes in skin temperature, and it has been explored as an adjunctive tool for breast cancer screening. Increased blood flow and metabolic activity in tumors can cause temperature differences, which thermography aims to visualize. | Non-invasive and no ionizing radiation exposure. | Limited sensitivity and specificity compared to other imaging modalities. | Arora et al., 2008 [42] |
Can detect temperature changes associated with increased blood flow in some tumors. | Variability in results due to external factors like room temperature. | |||
Can be used as an adjunctive tool for breast cancer detection. | Not widely accepted as a primary screening tool due to its limitations. | |||
Microscopy | Microscopy, including light microscopy and electron microscopy, is used for detailed examination of tissue samples obtained through biopsies. It provides insights into cellular and tissue morphology, helping pathologists identify cancerous changes and characterize tumors. | Provides high-resolution imaging of tissue samples at the cellular and subcellular level. | Invasive technique requiring tissue samples (biopsies). | Kumar et al., 2014 and Mills et al., 2006 [43],[44] |
Can reveal detailed morphological and histological information. | Limited to ex vivo analysis and may not capture dynamic processes. | |||
Important for diagnostic confirmation and understanding tumor characteristics. | Labor-intensive and time-consuming for comprehensive analysis. | |||
Radionuclide bone scans | Bone scans using radiolabeled bisphosphonates or phosphonates help detect metastatic bone disease. They can identify areas of increased bone turnover, indicating the presence of cancer metastases. | Sensitive for detecting bone metastases and assessing overall skeletal health. | Limited anatomical detail compared to CT or MRI. | Coleman et al., 2001 [45] |
Allows visualization of multiple skeletal sites in a single scan. | Cannot distinguish between active cancer lesions and non-malignant conditions like arthritis. | |||
Can provide early detection of bone metastases before they become symptomatic. | High sensitivity can lead to false-positive findings. | |||
Thyroid cancer imaging | Radioactive iodine (iodine-131) is used to diagnose and treat thyroid cancer. Thyroid cancer cells take up iodine, allowing for imaging and targeted treatment. | Effective for imaging thyroid tissue and thyroid cancer metastases. | Limited application to thyroid and thyroid-related conditions. | Jin et al., 2018 and Brose et al., 2012 [46],[47] |
Allows for targeted therapy using radioactive iodine-131. | The long half-life of iodine-131 requires special precautions for patient and public safety. | |||
| Not suitable for cancers that do not take up iodine, such as some types of thyroid cancer. |||
Peptide receptor radionuclide therapy (PRRT) | PRRT involves targeting cancer cells that overexpress specific receptors with radiolabeled peptides. It is used for neuroendocrine tumors and some types of prostate cancer. | Targets specific receptors on cancer cells, reducing damage to normal tissues. | Limited to tumors that express the specific receptors targeted by the radiolabeled peptides. | Strosberg et al., 2017 [48] |
Can provide palliative treatment for certain types of neuroendocrine tumors. | Long imaging and treatment times due to the radioactive decay of the radionuclides. | |||
Offers a personalized approach to cancer treatment. | Potential side effects related to radiation exposure and peptide therapy. |
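As a concrete example of how the segmentation quality discussed above is commonly quantified, the following sketch computes the Dice coefficient between a predicted tumor mask and a reference annotation; the masks here are synthetic placeholders.

```python
# Sketch: evaluating an AI-produced tumor segmentation against a reference mask
# with the Dice coefficient, a standard overlap metric in medical imaging.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity: 2 * intersection / (|pred| + |truth|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection) / (pred.sum() + truth.sum() + eps)

# Toy 2D masks standing in for a model prediction and an expert annotation.
truth = np.zeros((128, 128), dtype=np.uint8)
truth[40:80, 40:80] = 1
pred = np.zeros_like(truth)
pred[45:85, 45:85] = 1

print(f"Dice overlap: {dice_coefficient(pred, truth):.3f}")
```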
In histopathology, the examination of tissue specimens for cancer diagnosis and grading has traditionally been a labor-intensive process (Figure 4). AI-driven algorithms have significantly expedited this process by automating tasks such as tumor segmentation, cell classification, and identification of histological features [5]. Table 3 lists pathology tools used for the detection of cancer. DL models trained on extensive pathological image datasets have demonstrated performance comparable to expert pathologists in diagnosing various cancers, including breast, lung, and prostate cancers [5],[49]. The integration of AI into pathology not only accelerates diagnosis but also enhances consistency and reduces inter-observer variability, thereby contributing to improved patient care [49].
Recent advancements in AI have led to the development of computer-aided detection and diagnosis (CAD) systems that assist pathologists in identifying subtle morphological patterns indicative of malignancy [5],[49]. These AI systems can highlight regions of interest, such as potential tumor areas, for pathologists to review, enhancing their efficiency and accuracy. Moreover, AI-powered algorithms aid in predicting disease prognosis and guiding treatment decisions based on histopathological features [49]. By analyzing a multitude of data points within pathology slides, AI provides valuable insights into disease progression, enabling more informed and personalized therapeutic strategies.
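A patch-based CAD workflow of the kind described above can be sketched as follows: the slide is tiled, each tile is scored, and the scores are assembled into a suspicion heatmap for the pathologist to review. The scoring function here is a placeholder standing in for a trained classifier.

```python
# Sketch of patch-based CAD for histopathology: tile a large slide image,
# score each tile, and assemble a suspicion heatmap for pathologist review.
# `score_patch` is a placeholder for a trained model; the slide is synthetic.
import numpy as np

def score_patch(patch: np.ndarray) -> float:
    # Placeholder: a real system would run a trained CNN here.
    return float(patch.mean())

def suspicion_heatmap(slide: np.ndarray, patch_size: int = 256) -> np.ndarray:
    rows = slide.shape[0] // patch_size
    cols = slide.shape[1] // patch_size
    heatmap = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = slide[i * patch_size:(i + 1) * patch_size,
                          j * patch_size:(j + 1) * patch_size]
            heatmap[i, j] = score_patch(patch)
    return heatmap

slide = np.random.rand(2048, 2048)   # stand-in for a downsampled slide region
heatmap = suspicion_heatmap(slide)
print("tiles flagged for review:", np.argwhere(heatmap > 0.55))
```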
Tool/modality | Description | Limit of detection | Applications | Reference |
Histopathology | Examination of tissues under a microscope to identify disease. | Single cells | Tissue analysis for various cancers. | [50] |
Cytology | Study of individual cells to detect abnormalities. | Single cells | Screening for cervical and other cancers. | [51] |
Immunohistochemistry (IHC) | Use of antibodies to detect specific antigens in cells of a tissue section. | Protein expression levels | Determining cancer subtypes and prognosis. | [50] |
Molecular pathology | Study of molecules within organs, tissues, or bodily fluids. | Varies by assay | Genetic mutations, gene expression. | [52] |
Genetic testing | Analysis of DNA, RNA, chromosomes, proteins, and certain metabolites. | Single nucleotide changes | Hereditary cancer syndromes, targeted therapy decisions. | [52] |
Liquid biopsy | A non-invasive test that detects cancer cells or their DNA in blood. | Circulating tumor DNA | Monitoring, early detection of cancer. | [51] |
Tumor marker tests | Blood tests can help to identify the presence of certain types of cancer. | Varies by marker | Prognosis, monitoring treatment response. | [52] |
The field of cancer genomics has benefited immensely from AI-driven data analysis. Large-scale genomic datasets, encompassing information on genetic mutations, gene expression, and molecular pathways, have provided a wealth of information for understanding cancer biology and identifying potential therapeutic targets [53]. AI algorithms excel at identifying subtle genetic patterns associated with cancer predisposition, allowing for the identification of individuals at higher risk of developing specific cancers. Furthermore, AI-powered models enable the identification of biomarkers that predict treatment response and guide the selection of targeted therapies, leading to more effective and individualized treatment strategies [15],[54]–[58]. As an example of AI's impact on cancer genomics, researchers have employed ML algorithms to analyze genomic data and identify driver mutations in cancer genomes [59]. These driver mutations play a crucial role in the initiation and progression of cancer. AI's ability to sift through vast genomic datasets has accelerated the identification of rare and previously unknown mutations that contribute to cancer development [60].
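One simple, commonly used way to nominate candidate biomarkers from expression data is sparse (L1-penalized) logistic regression, sketched below on synthetic data; the expression matrix, outcome, and penalty strength are illustrative assumptions.

```python
# Sketch: ranking candidate genomic biomarkers with an L1-penalised logistic
# regression, which drives uninformative gene coefficients to zero.
# The expression matrix and outcome are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients, n_genes = 200, 500
expression = rng.normal(size=(n_patients, n_genes))   # gene expression matrix
outcome = (expression[:, 0] - expression[:, 1] + rng.normal(size=n_patients) > 0)

X = StandardScaler().fit_transform(expression)
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X, outcome.astype(int))

coef = model.coef_.ravel()
top = np.argsort(-np.abs(coef))[:5]
print("top candidate biomarkers (gene indices):", top, coef[top].round(3))
```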
Additionally, AI has been pivotal in deciphering complex gene expression patterns that characterize different cancer subtypes [61]. By categorizing patients based on the unique molecular signatures of their tumors, AI algorithms assist in tailoring treatment regimens that align with the genetic makeup of the tumor and the patient's predicted response [62]. Similarly, multi-omics is an integrative approach that examines datasets from multiple “omic” layers, such as genomics, proteomics, and metabolomics, to gain a comprehensive understanding of biological processes and disease mechanisms. This approach is pivotal in personalized medicine, as it allows analysis of how genes, proteins, and other molecules interact within a cell or organism, and how these interactions are altered in disease states [63],[64]. Recently, multi-omics and AI have been used to advance personalized medicine, detect novel subtypes, and predict treatment responses. Wang et al. applied AI to analyze multi-omics data from breast cancer patients and identified novel cancer subtypes with distinct therapeutic response profiles, opening a new avenue for targeted therapies based on an individual's specific cancer biology. The combined power of multi-omics and AI holds immense promise for moving healthcare toward a more personalized and effective approach.
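A minimal sketch of such multi-omics integration, assuming fully synthetic data, is to scale and concatenate the omics layers, reduce dimensionality, and cluster patients into putative subtypes:

```python
# Sketch of a simple multi-omics integration: concatenate per-patient genomics,
# proteomics, and metabolomics features, reduce dimensionality with PCA, and
# cluster patients into putative molecular subtypes. All data are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_patients = 120
genomics = rng.normal(size=(n_patients, 300))
proteomics = rng.normal(size=(n_patients, 100))
metabolomics = rng.normal(size=(n_patients, 50))

# Scale each omics layer before concatenation so no single layer dominates.
layers = [StandardScaler().fit_transform(m) for m in (genomics, proteomics, metabolomics)]
combined = np.hstack(layers)

embedding = PCA(n_components=10, random_state=0).fit_transform(combined)
subtypes = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedding)
print("patients per putative subtype:", np.bincount(subtypes))
```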
AI-driven clinical decision support systems have gained prominence in guiding treatment planning and patient management. These systems utilize patient data, including clinical history, imaging results, and molecular profiles, to assist oncologists in making informed decisions about treatment options [65]. ML algorithms analyze patient-specific data to predict treatment responses, adverse effects, and disease progression, facilitating personalized and adaptive treatment plans [66],[67]. Such real-time insights optimize therapeutic efficacy and minimize unnecessary interventions, ultimately enhancing patient quality of life. Moreover, AI's potential is not limited to primary diagnosis; it extends to image-guided interventions. AI-powered image registration and fusion techniques enhance the precision of minimally invasive procedures, enabling accurate targeting of tumors and reducing the risk of complications [68]. For instance, AI-driven navigation systems enhance the accuracy of needle biopsies and radiofrequency ablations, improving the success rates of these procedures [9]. These innovations underscore AI's role in bridging the gap between diagnostics and treatment, revolutionizing the continuum of cancer care.
In addition to treatment decisions, AI aids in prognosis assessment. By analyzing multi-modal patient data, including imaging, clinical reports, and genomics, AI models can provide prognostic insights for cancer outcomes, helping clinicians understand disease trajectories and tailor follow-up strategies [69]. Furthermore, AI assists in the discovery of potential therapeutic targets by analyzing intricate interactions within molecular pathways and identifying druggable vulnerabilities in cancer cells [70]. Through integration with high-throughput technologies, AI expedites drug discovery and development, potentially leading to novel therapeutic agents with enhanced efficacy and reduced side effects [3].
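To illustrate the prognosis-modelling idea above, the sketch below fits a Cox proportional-hazards model to synthetic follow-up data; it assumes the lifelines package is available, and the covariates and effect sizes are placeholders rather than results from a real cohort.

```python
# Sketch of a prognostic model: a Cox proportional-hazards fit on synthetic
# follow-up data (assumes the `lifelines` package is installed). Columns and
# effect sizes are illustrative, not derived from a real patient cohort.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "tumor_stage": rng.integers(1, 5, n),
    "mutation_burden": rng.normal(size=n),
    "age": rng.normal(62, 10, n),
})
# Synthetic survival times that worsen with stage; roughly 30% censored.
df["months"] = rng.exponential(scale=60 / df["tumor_stage"])
df["event"] = rng.random(n) > 0.3

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()

# Relative risk scores can stratify new patients into follow-up groups.
risk = cph.predict_partial_hazard(df.drop(columns=["months", "event"]))
print("high-risk patients:", (risk > risk.quantile(0.8)).sum())
```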
AI-assisted clinical decision support can also enhance the performance of screening mammography in identifying cancerous versus non-cancerous findings. Dembrower et al. found that when AI is combined with radiologists, the number of examinations with an abnormal interpretation increases by 21% [71]. The study noted that AI and human experts may perceive different image features as indicative of cancer; hence, combining AI with radiologists can increase the sensitivity of cancer detection [71]. Fan et al. explored the use of AI for detecting hematological cancers such as leukemia and lymphoma from digitized images of peripheral blood films [72].
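The double-reading effect described above can be illustrated with a toy calculation: flag an exam when either the radiologist or the AI marks it as suspicious and compare sensitivities. The numbers below are synthetic and are not taken from the cited studies.

```python
# Toy sketch of double reading: flag an exam if either the radiologist or the
# AI system marks it as suspicious, and compare sensitivities. All reads and
# ground-truth labels are synthetic stand-ins for a screening cohort.
import numpy as np

rng = np.random.default_rng(3)
n = 1000
cancer = rng.random(n) < 0.05   # ground truth established at follow-up
radiologist_flag = (cancer & (rng.random(n) < 0.80)) | (~cancer & (rng.random(n) < 0.05))
ai_flag = (cancer & (rng.random(n) < 0.75)) | (~cancer & (rng.random(n) < 0.07))

def sensitivity(flag: np.ndarray) -> float:
    return flag[cancer].mean()

combined = radiologist_flag | ai_flag   # flag if either reader flags
for name, flag in [("radiologist", radiologist_flag), ("AI", ai_flag), ("combined", combined)]:
    print(f"{name}: sensitivity={sensitivity(flag):.2f}, recall rate={flag.mean():.2f}")
```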
While the potential of AI in cancer detection is substantial, many challenges persist in AI-driven healthcare systems. Ethical considerations, data privacy concerns, and algorithmic bias necessitate vigilant oversight and collaboration among healthcare professionals, data scientists, and regulatory bodies [73],[74]. The security and confidentiality of patient data are major and challenging issues in AI-driven healthcare systems. The frequency of unauthorized access and data breaches has risen sharply in recent years and can undermine trust in this ecosystem. Guidelines need to be established by the relevant authorities worldwide, with accountability at each level. Ethical concerns regarding the application of AI tools also pose a serious issue for society. The recommendations made by AI tools need to be critically evaluated by clinical experts before implementation, in the best interests of patients [75].
A major challenge for AI in cancer diagnosis is the lack of standardization in data collection and analysis across algorithms. This makes it very difficult to compare results across studies, which hinders the development of reliable algorithms. The requirement for transparent and interpretable AI models presents another difficulty [76]. Algorithmic bias, which is often amplified when AI tools are deployed, is a serious challenge in AI-driven healthcare systems. An AI algorithm trained on one specific population does not necessarily perform well for populations in other regions [77]. Bias can be addressed at several levels, including data collection and preparation methods, model development and validation, model deployment in the clinical environment, and the types of patients and regions of deployment [78]. More studies are needed that address this challenge and establish acceptable levels of model bias. Data collection methods that favor certain populations may also introduce bias into the model. It must be ensured that the training dataset of an ML model represents all populations in order to reduce disparities in the results. The validation of model outputs must be thoroughly examined across diverse classes and populations to reduce bias in the model outcome.
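A basic subgroup audit of the kind recommended above can be sketched as follows, using synthetic validation data with demographic annotations; the group labels and error rates are illustrative assumptions.

```python
# Sketch of a simple bias audit: compute sensitivity and specificity of a model's
# predictions separately for each population subgroup. The data are synthetic
# placeholders for a real validation set annotated with demographic groups.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 2000
df = pd.DataFrame({
    "group": rng.choice(["region_A", "region_B", "region_C"], size=n),
    "truth": rng.random(n) < 0.1,
})
# Synthetic predictions that are deliberately weaker for one subgroup.
miss_rate = np.where(df["group"] == "region_C", 0.4, 0.15)
df["pred"] = (df["truth"] & (rng.random(n) > miss_rate)) | (~df["truth"] & (rng.random(n) < 0.05))

for group, sub in df.groupby("group"):
    tp = (sub["truth"] & sub["pred"]).sum()
    tn = (~sub["truth"] & ~sub["pred"]).sum()
    sens = tp / sub["truth"].sum()
    spec = tn / (~sub["truth"]).sum()
    print(f"{group}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```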
For clinicians to trust AI models and use them in clinical decision-making, it must be possible to explain how the models arrive at a cancer diagnosis. The generalizability of AI algorithms across diverse patient populations and healthcare settings is crucial to ensure equitable access to AI-powered diagnostic tools [73],[79]. Furthermore, efforts to enhance algorithm interpretability and to address issues related to transparency and trust are imperative for widespread clinical adoption [80],[81]. One major obstacle to AI research and algorithm development has been the absence of large, publicly available, well-annotated cancer datasets. Validation and reproducibility in cancer research are hampered by the absence of benchmarking datasets. Another difficulty is creating reliable algorithms that can manage complicated data [79]. The validation and reproducibility of AI-driven cancer research can be promoted through open data-sharing policies across research facilities and institutions working on AI-based cancer detection worldwide. Collaborative research between clinicians and the scientific community can lead to innovative solutions to the problem of cancer detection using AI tools. The domain of AI-based cancer detection also faces constraints such as regulatory compliance, inflexible healthcare systems, and difficulties in practical implementation. The absence of frameworks for the standardization of cancer-related health data presents a further significant obstacle to the development of AI models [79].
Despite several challenges and limitations, the use of AI in cancer detection has a bright future ahead. AI can, for instance, be used to evaluate vast volumes of data from many sources, such as imaging, proteomics, and genetic data, to find novel biomarkers for the diagnosis and treatment of cancer. AI is also capable of creating customized treatment programs according to a patient's individual genetic profile and medical background. Additionally, by lowering false positives and false negatives, AI can be used to increase the accuracy of cancer screening procedures like mammography and colonoscopy [82].
AI in oncology has catalyzed a paradigm shift, ushering in a new era of cancer detection, diagnosis, and treatment. Its applications across domains such as radiology, pathology, genomics, and clinical decision-support systems hold immense promise for reshaping patient care and outcomes. Beyond augmenting human capabilities, AI is fundamentally modifying the healthcare landscape by enabling precise and efficient analysis of medical images, genetic intricacies, and treatment responses. These capabilities not only streamline the diagnostic process but also revolutionize treatment strategies. AI-powered risk-prediction models and advanced imaging techniques enable clinicians to detect cancer at its earliest stages, significantly improving survival rates and patient prognosis. Moreover, the arrival of precision medicine driven by AI allows the customization of interventions based on individual genetic and molecular profiles, optimizing treatment efficacy while minimizing adverse effects and enhancing overall quality of life. However, to fully realize the potential of AI in clinical practice, several challenges must be addressed, including ensuring data privacy, mitigating biases in algorithmic decision-making, and maintaining transparency and accountability. As AI technologies continue to evolve, the implementation of explainable AI, robust validation protocols, and ethical guidelines will be pivotal in fostering responsible and widespread adoption of AI-powered solutions in oncology.
Collaborative efforts among healthcare stakeholders, data scientists, and regulatory bodies are essential for advancing AI's role in cancer detection and facilitating patient-centered care. In conclusion, the integration of AI in cancer detection represents a transformative moment in healthcare. By leveraging AI to enhance early diagnosis, tailor treatment strategies, and improve patient outcomes, we can redefine the approach to cancer care and move towards a future where timely interventions and personalized treatments are the standard of care.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
![]() |
[32] |
P. J. Basser, J. Mattiello, D. Lebihan, Estimation of the effective self-diffusion tensor from the NMR spin echo, J. Magn. Reson. Ser. B, 103 (1994), 247–254. https://doi.org/10.1006/jmrb.1994.1037 doi: 10.1006/jmrb.1994.1037
![]() |
[33] |
L. Fan, H. Li, J. Zhuo, Y. Zhang, J. Wang, L. Chen, et al., The human brainnetome atlas: A new brain atlas based on connectional architecture, Cereb. Cortex, 26 (2016), 3508–3526. https://doi.org/10.1093/cercor/bhw157 doi: 10.1093/cercor/bhw157
![]() |
[34] |
R. O. Duncan, G. M. Boynton, Cortical magnification within human primary visual cortex correlates with acuity thresholds, Neuron, 38 (2003), 659–671. https://doi.org/10.1016/S0896-6273(03)00265-4 doi: 10.1016/S0896-6273(03)00265-4
![]() |
[35] |
J. C. Horton, W. F. Hoyt, The representation of the visual field in human striate cortex: A revision of the classic Holmes map, Arch. Ophthalmol., 109 (1991), 816–824. https://doi.org/10.1001/archopht.1991.01080060080030 doi: 10.1001/archopht.1991.01080060080030
![]() |
[36] |
J. Rovamo, V. Virsu, An estimation and application of the human cortical magnification factor, Exp. Brain Res., 37 (1979), 495–510. https://doi.org/10.1007/BF00236819 doi: 10.1007/BF00236819
![]() |
[37] |
M. M. Schira, A. R. Wade, C. W. Tyler, Two-dimensional mapping of the central and parafoveal visual field to human visual cortex, J. Neurophysiol., 97 (2007), 4284–4295. https://doi.org/10.1152/jn.00972.2006 doi: 10.1152/jn.00972.2006
![]() |
[38] | D. J. Tolhurst, L. Ling, Magnification factors and the organization of the human striate cortex, Hum. Neurobiol., 6 (1988), 247–254. |
[39] | J. Mate, A. C. Pires, G. Campoy, S. Estaún, Estimating the duration of visual stimuli in motion environments., Psicológica, 30 (2009), 287–300. |
[40] |
H. Karşilar, Y. D. Kisa, F. Balci, Dilation and constriction of subjective time based on observed walking speed, Front. Psychol., 9 (2018), 2565. https://doi.org/10.3389/fpsyg.2018.02565 doi: 10.3389/fpsyg.2018.02565
![]() |
[41] | S. W. brown, Time, change, and motion: The effects of stimulus movement on temporal perception, Percept. Psychophys., 57 (1995), 105–116. https://doi.org/10.3758/BF03211853 |
[42] |
A. Petzold, E. Pitz, The historical origin of the pulfrich effect: A serendipitous astronomic observation at the border of the Milky Way, Neuro-Ophthalmology, 33 (2009), 39–46. https://doi.org/10.1080/01658100802590829 doi: 10.1080/01658100802590829
![]() |
[43] |
J. A. Wilson, S. M. Anstis, Visual delay as a function of luminance, Am. J. Psychol., 82 (1969), 350–358. https://doi.org/10.2307/1420750 doi: 10.2307/1420750
![]() |
[44] |
A. Reynaud, R. F. Hess, Interocular contrast difference drives illusory 3D percept, Sci. Rep., 7 (2017), 1–6. https://doi.org/10.1038/s41598-017-06151-w doi: 10.1038/s41598-017-06151-w
![]() |
[45] |
N. Qian, R. A. Andersen, A physiological model for motion-stereo integration and a unified explanation of Pulfrich-like phenomena, Vision Res., 37 (1997), 1683–1698. https://doi.org/10.1016/S0042-6989(96)00164-2 doi: 10.1016/S0042-6989(96)00164-2
![]() |
[46] |
A. Anzai, I. Ohzawa, R. D. Freeman, Joint-encoding of motion and depth by visual cortical neurons: Neural basis of the Pulfrich effect, Nat. Neurosci., 4 (2001), 513–518. https://doi.org/10.1038/87462 doi: 10.1038/87462
![]() |
[47] |
A. A. L. D'Alfonso, J. Van Honk, D. J. L. G. Schutter, A. R. Caffé, A. Postma, E. H. F. De Haan, Spatial and temporal characteristics of visual motion perception involving V5 visual cortex, Neurol. Res., 24 (2002), 266–270. https://doi.org/10.1179/016164102101199891 doi: 10.1179/016164102101199891
![]() |
[48] |
G. Beckers, S. Zeki, The consequences of inactivating areas V1 and V5 on visual motion perception, Brain, 118 (1995), 49–60. https://doi.org/10.1093/brain/118.1.49 doi: 10.1093/brain/118.1.49
![]() |
[49] |
R. Laycock, D. P. Crewther, P. B. Fitzgerald, S. G. Crewther, Evidence for fast signals and later processing in human V1/V2 and V5/MT+: A TMS study of motion perception, J. Neurophysiol., 98 (2007), 1253–1262. https://doi.org/10.1152/jn.00416.2007 doi: 10.1152/jn.00416.2007
![]() |
[50] |
K. Spang, M. Morgan, Cortical correlates of stereoscopic depth produced by temporal delay, J. Vis., 8 (2008), 1–12. https://doi.org/10.1167/8.9.10 doi: 10.1167/8.9.10
![]() |
[51] |
R. A. Andersen, G. K. Essick, R. M. Siegel, Encoding of spatial location by posterior parietal neurons, Science, 230 (1985), 456–458. https://doi.org/10.1126/science.4048942 doi: 10.1126/science.4048942
![]() |
[52] |
A. M. Ferrandez, L. Hugueville, S. Lehéricy, J. B. Poline, C. Marsault, V. Pouthas, Basal ganglia and supplementary motor area subtend duration perception: An fMRI study, Neuroimage, 19 (2003), 1532–1544. https://doi.org/10.1016/S1053-8119(03)00159-9 doi: 10.1016/S1053-8119(03)00159-9
![]() |
[53] |
D. L. Harrington, K. Y. Haaland, R. T. Knight, Cortical networks underlying mechanisms of time perception, J. Neurosci., 18 (1998), 1085–1095. https://doi.org/10.1523/jneurosci.18-03-01085.1998 doi: 10.1523/jneurosci.18-03-01085.1998
![]() |
[54] |
R. B. Ivry, R. M. C. Spencer, The neural representation of time, Curr. Opin. Neurobiol., 14 (2004), 225–232. https://doi.org/10.1016/j.conb.2004.03.013 doi: 10.1016/j.conb.2004.03.013
![]() |
[55] |
P. Janssen, M. N. Shadlen, A representation of the hazard rate of elapsed time in macaque area LIP, Nat. Neurosci., 8 (2005), 234–241. https://doi.org/10.1038/nn1386 doi: 10.1038/nn1386
![]() |
[56] |
M. Jazayeri, M. N. Shadlen, A neural mechanism for sensing and reproducing a time interval, Curr. Biol., 25 (2015), 2599–2609. https://doi.org/10.1016/j.cub.2015.08.038 doi: 10.1016/j.cub.2015.08.038
![]() |
[57] |
C. S. Konen, S. Kastner, Representation of eye movements and stimulus motion in topographically organized areas of human posterior parietal cortex, J. Neurosci., 28 (2008), 8361–8375. https://doi.org/10.1523/JNEUROSCI.1930-08.2008 doi: 10.1523/JNEUROSCI.1930-08.2008
![]() |
[58] |
M. I. Leon, M. N. Shadlen, Representation of time by neurons in the posterior parietal cortex of the macaque, Neuron, 38 (2003), 317–327. https://doi.org/https://doi.org/10.1016/S0896-6273(03)00185-5 doi: 10.1016/S0896-6273(03)00185-5
![]() |
[59] |
P. A. Lewis, R. C. Miall, Distinct systems for automatic and cognitively controlled time measurement: Evidence from neuroimaging, Curr. Opin. Neurobiol., 13 (2003), 250–255. https://doi.org/10.1016/S0959-4388(03)00036-9 doi: 10.1016/S0959-4388(03)00036-9
![]() |
[60] |
H. Onoe, M. Komori, K. Onoe, H. Takechi, H. Tsukada, Y. Watanabe, Cortical networks recruited for time perception: A monkey positron emission tomography (PET) study, Neuroimage, 13 (2001), 37–45. https://doi.org/10.1006/nimg.2000.0670 doi: 10.1006/nimg.2000.0670
![]() |
[61] | F. Protopapa, M. J. Hayashi, S. Kulashekhar, W. Van Der Zwaag, G. Battistella, M. M. Murray, et al., Chronotopic maps in human supplementary motor area, 17 (2019), e3000026. https://doi.org/10.1371/journal.pbio.3000026 |
[62] |
H. Sakata, M. Kusunoki, Organization of space perception: neural representation of three-dimensional space in the posterior parietal cortex, Curr. Opin. Neurobiol., 2 (1992), 170–174. https://doi.org/10.1016/0959-4388(92)90007-8 doi: 10.1016/0959-4388(92)90007-8
![]() |
[63] |
J. G. Mikhael, S. J. Gershman, Adapting the flow of time with dopamine, J. Neurophysiol., 121 (2019), 1748–1760. https://doi.org/10.1152/jn.00817.2018 doi: 10.1152/jn.00817.2018
![]() |
[64] |
T. Liu, P. Hu, R. Cao, X. Ye, Y. Tian, X. Chen, et al., Dopaminergic modulation of biological motion perception in patients with Parkinson's disease, Sci. Rep., 7 (2017), 1–9. https://doi.org/10.1038/s41598-017-10463-2 doi: 10.1038/s41598-017-10463-2
![]() |
[65] |
C. Gratton, S. Yousef, E. Aarts, D. L. Wallace, M. D'Esposito, M. A. Silver, Cholinergic, but not dopaminergic or noradrenergic, enhancement sharpens visual spatial perception in humans, J. Neurosci., 37 (2017), 4405–4415. https://doi.org/10.1523/JNEUROSCI.2405-16.2017 doi: 10.1523/JNEUROSCI.2405-16.2017
![]() |
[66] |
S. Threlfell, M. A. Clements, T. Khodai, I. S. Pienaar, R. Exley, J. Wess, et al., Striatal muscarinic receptors promote activity dependence of dopamine transmission via distinct receptor subtypes on cholinergic interneurons in ventral versus dorsal striatum, J. Neurosci., 30 (2010), 3398–3408. https://doi.org/10.1523/JNEUROSCI.5620-09.2010 doi: 10.1523/JNEUROSCI.5620-09.2010
![]() |
[67] |
S. Threlfell, T. Lalic, N. J. Platt, K. A. Jennings, K. Deisseroth, S. J. Cragg, Striatal dopamine release is triggered by synchronized activity in cholinergic interneurons, Neuron, 75 (2012), 58–64. https://doi.org/10.1016/j.neuron.2012.04.038 doi: 10.1016/j.neuron.2012.04.038
![]() |
[68] |
E. D. Abercrombie, P. DeBoer, Substantia nigra D1 receptors and stimulation of striatal cholinergic interneurons by dopamine: A proposed circuit mechanism, J. Neurosci., 17 (1997), 8498–8505. https://doi.org/10.1523/jneurosci.17-21-08498.1997 doi: 10.1523/jneurosci.17-21-08498.1997
![]() |
[69] |
B. Di Cara, F. Panayi, A. Gobert, A. Dekeyne, D. Sicard, L. De Groote, et al., Activation of dopamine D1 receptors enhances cholinergic transmission and social cognition: A parallel dialysis and behavioural study in rats, Int. J. Neuropsychopharmacol., 10 (2007), 383–399. https://doi.org/10.1017/S1461145706007103 doi: 10.1017/S1461145706007103
![]() |
[70] |
A. Imperato, M. C. Obinu, G. L. Gessa, Stimulation of both dopamine D1 and D2 receptors facilitates in vivo acetylcholine release in the hippocampus, Brain Res., 618 (1993), 341–345. https://doi.org/10.1016/0006-8993(93)91288-4 doi: 10.1016/0006-8993(93)91288-4
![]() |
[71] |
A. Martorana, F. Mori, Z. Esposito, H. Kusayanagi, F. Monteleone, C. Codecà, et al., Dopamine modulates cholinergic cortical excitability in Alzheimer's disease patients, Neuropsychopharmacology, 34 (2009), 2323–2328. https://doi.org/10.1038/npp.2009.60 doi: 10.1038/npp.2009.60
![]() |
[72] |
M. S. Lidow, P. S. Goldman-Rakic, D. W. Gallager, P. Rakic, Distribution of dopaminergic receptors in the primate cerebral cortex: Quantitative autoradiographic analysis using 3H.raclopride, 3H.spiperone and 3H.SCH23390, Neuroscience, 40 (1991), 657–671. https://doi.org/10.1016/0306-4522(91)90003-7 doi: 10.1016/0306-4522(91)90003-7
![]() |
[73] |
A. Mueller, R. M. Krock, S. Shepard, T. Moore, Dopamine receptor expression among local and visual cortex-projecting frontal eye field neurons, Cereb. Cortex, 30 (2020), 148–164. https://doi.org/10.1093/cercor/bhz078 doi: 10.1093/cercor/bhz078
![]() |
[74] |
K. Zilles, N. Palomero-Gallagher, Multiple transmitter receptors in regions and layers of the human cerebral cortex, Front. Neuroanat., 11 (2017), 1–26. https://doi.org/10.3389/fnana.2017.00078 doi: 10.3389/fnana.2017.00078
![]() |
[75] |
J. McLean, L. A. Palmer, Contribution of linear spatiotemporal receptive field structure to velocity selectivity of simple cells in area 17 of cat, Vision Res., 29 (1989), 675–679. https://doi.org/10.1016/0042-6989(89)90029-1 doi: 10.1016/0042-6989(89)90029-1
![]() |
[76] |
A. S. Pawar, S. Gepshtein, S. Savel'ev, T. D. Albright, Mechanisms of spatiotemporal selectivity in cortical area MT, Neuron, 101 (2019), 514–527. https://doi.org/10.1016/j.neuron.2018.12.002 doi: 10.1016/j.neuron.2018.12.002
![]() |
[77] |
N. J. Priebe, C. R. Cassanello, S. G. Lisberger, The neural representation of speed in macaque area MT/V5, J. Neurosci., 23 (2003), 5650–5661. https://doi.org/10.1523/jneurosci.23-13-05650.2003 doi: 10.1523/jneurosci.23-13-05650.2003
![]() |
[78] |
D. Giaschi, A. Zwicker, S. A. Young, B. Bjornson, The role of cortical area V5/MT+ in speed-tuned directional anisotropies in global motion perception, Vision Res., 47 (2007), 887–898. https://doi.org/10.1016/j.visres.2006.12.017 doi: 10.1016/j.visres.2006.12.017
![]() |
[79] |
J. A. Perrone, A. Thiele, A model of speed tuning in MT neurons, Vision Res., 42 (2002), 1035–1051. https://doi.org/10.1016/S0042-6989(02)00029-9 doi: 10.1016/S0042-6989(02)00029-9
![]() |
[80] |
D. C. Penn, K. J. Holyoak, D. J. Povinelli, Darwin's mistake: Explaining the discontinuity between human and nonhuman minds, Behav. Brain Sci., 31 (2008), 109–178. https://doi.org/10.1017/S0140525X08003543 doi: 10.1017/S0140525X08003543
![]() |
[81] |
M. Stuart-Fox, The origins of causal cognition in early hominins, Biol. Philos., 30 (2015), 247–266. https://doi.org/10.1007/s10539-014-9462-y doi: 10.1007/s10539-014-9462-y
![]() |
[82] |
J. A. Perrone, A. Thiele, Speed skills: measuring the visual speed analyzing properties of primate MT neurons, Nat. Neurosci., 4 (2001), 526–532. https://doi.org/10.1038/87480 doi: 10.1038/87480
![]() |
[83] |
G. Riddoch, Dissociation of visual perceptions due to occipital injuries, with especial reference to appreciation of movement, Brain, 40 (1917), 15–57. https://doi.org/10.1093/brain/40.1.15 doi: 10.1093/brain/40.1.15
![]() |
[84] |
S. Zeki, D.H. Ffytche, The Riddoch syndrome: Insights into the neurobiology of conscious vision, Brain, 121 (1998), 25–45. https://doi.org/10.1093/brain/121.1.25 doi: 10.1093/brain/121.1.25
![]() |
[85] |
T. Amemiya, B. Beck, V. Walsh, H. Gomi, P. Haggard, Visual area V5/hMT+ contributes to perception of tactile motion direction: A TMS study, Sci. Rep., 7 (2017), 1–7. https://doi.org/10.1038/srep40937 doi: 10.1038/srep40937
![]() |
[86] |
K. Krug, A common neuronal code for perceptual processes in visual cortex? Comparing choice and attentional correlates in V5/MT, Philos. Trans. R. Soc. B Biol. Sci., 359 (2004), 929–941. https://doi.org/10.1098/rstb.2003.1415 doi: 10.1098/rstb.2003.1415
![]() |
[87] |
C. Poirier, O. Collignon, A. G. DeVolder, L. Renier, A. Vanlierde, D. Tranduy, et al., Specific activation of the V5 brain area by auditory motion processing: An fMRI study, Cogn. Brain Res., 25 (2005), 650–658. https://doi.org/10.1016/j.cogbrainres.2005.08.015 doi: 10.1016/j.cogbrainres.2005.08.015
![]() |
[88] |
S. Zeki, Area V5—a microcosm of the visual brain, Front. Integr. Neurosci., 9 (2015), 1–18. https://doi.org/10.3389/fnint.2015.00021 doi: 10.3389/fnint.2015.00021
![]() |
[89] |
J. Kim, D. Norton, R. McBain, D. Ongur, Y. Chen, Deficient biological motion perception in schizophrenia: Results from a motion noise paradigm, Front. Psychol., 4 (2013), 391. https://doi.org/10.3389/fpsyg.2013.00391 doi: 10.3389/fpsyg.2013.00391
![]() |
[90] |
Y. Chen, Abnormal visual motion processing in schizophrenia: A review of research progress, Schizophr. Bull., 37 (2011), 709–715. https://doi.org/10.1093/schbul/sbr020 doi: 10.1093/schbul/sbr020
![]() |
[91] |
J. D. Golomb, J. R. B. McDavitt, B. M. Ruf, J. I. Chen, A. Saricicek, K. H. Maloney, et al., Enhanced visual motion perception in major depressive disorder, J. Neurosci., 29 (2009), 9072–9077. https://doi.org/10.1523/JNEUROSCI.1003-09.2009 doi: 10.1523/JNEUROSCI.1003-09.2009
![]() |
[92] |
L. Richard, D. Charbonneau, An introduction to E-Prime, Tutor. Quant. Methods Psychol., 5 (2009), 68–76. https://doi.org/10.20982/tqmp.05.2.p068 doi: 10.20982/tqmp.05.2.p068
![]() |
[93] |
J. W. Peirce, PsychoPy-Psychophysics software in Python, J. Neurosci. Methods, 162 (2007), 8–13. https://doi.org/10.1016/j.jneumeth.2006.11.017 doi: 10.1016/j.jneumeth.2006.11.017
![]() |
[94] |
J. Ceccarini, H. Liu, K. Van Laere, E. D. Morris, C. Y. Sander, Methods for quantifying neurotransmitter dynamics in the living brain with PET imaging, Front. Physiol., 11 (2020), 792. https://doi.org/10.3389/fphys.2020.00792 doi: 10.3389/fphys.2020.00792
![]() |
[95] |
E. J. Novotny, R. K. Fulbright, P. L. Pearl, K. M. Gibson, D. L. Rothman, Magnetic resonance spectroscopy of neurotransmitters in human brain, Ann. Neurol., 54 (2003), S25–S31. https://doi.org/10.1002/ana.10697 doi: 10.1002/ana.10697
![]() |
[96] |
A. Routier, N. Burgos, M. Díaz, M. Bacci, S. Bottani, O. El-Rifai, et al., Clinica: An open-source software platform for reproducible clinical neuroscience studies, Front. Neuroinform., 15 (2021), 39. https://doi.org/10.3389/fninf.2021.689675 doi: 10.3389/fninf.2021.689675
![]() |
[97] |
W. T. Clarke, C. J. Stagg, S. Jbabdi, FSL-MRS: An end-to-end spectroscopy analysis package, Magn. Reson. Med., 85 (2021), 2950–2964. https://doi.org/10.1002/mrm.28630 doi: 10.1002/mrm.28630
![]() |
![]() |
![]() |
Serial number | Year of approval | Name of the device | Description of the device and its role |
1. | 2015 | ClearRead CT (Riverain Technologies LLC.) | Providing support for reviewing chest multi-slice CT scans and identifying potential nodules that require a radiologist's attention. |
2. | Transpara (ScreenPoint Medical BV) | Aiding physicians in the interpretation of screening mammograms and in the identification of suspicious areas indicative of breast cancer. | |
3. | 2016 | SmartTarget (SmartTarget Ltd.) | Participating in image-guided intervention and diagnostic procedures related to the prostate gland. |
4. | LungQ (Thirona Corp.) | Aiding in diagnosing and documenting abnormalities in pulmonary tissue images, specifically extracted from CT thoracic datasets. | |
5. | 2017 | AmCAD-US (AmCad BioMed Corporation) | A software designed to visualize and quantify ultrasound image data along with corresponding backscattered signals. |
6. | QuantX (Quantitative Insights) | An AI-enhanced diagnostic system designed to assist in achieving accurate diagnoses of breast cancer. | |
7. | Veye Chest (Aidence BV) | Assistance in the detection of pulmonary nodules from CT scans. | |
8. | 2018 | Arterys Oncology DL (Arterys) | An AI-powered, cloud-based medical imaging software designed to automatically measure and track lesions and nodules in both MRI and CT scans. |
9. | QVCAD (QView Medical Inc.) | An assistance tool aimed at detecting mammography-occult lesions in areas that were not initially identified as having suspicious findings. | |
10. | HealthMammo (Zebra Medical Vision Inc.) | Processing and analyzing mammograms to identify suspected lesions indicative of breast cancer. | |
11. | Arterys Oncology DL (Arterys Inc.) | Assisting in the oncological workflow by aiding users in confirming the presence or absence of lesions. This application supports anatomical datasets such as CT or MRI scans. | |
12. | AmCAD-UT (AmCad BioMed Corporation) | Providing support in the analysis of thyroid ultrasound images. | |
13. | Mia - Mammography Intelligent Assessment (Kheiron Medical Technologies Ltd.) | Offering assistance in the detection of breast cancer through the analysis of mammograms. | |
14. | Arterys MICA (Arterys) | A platform powered by AI for the analysis of medical images, including MRI and CT scans. | |
15. | SubtlePET (Subtle Medical) | An AI-driven technology that enables medical centers to provide quicker and safer patient scanning experiences, simultaneously improving exam throughput and provider profitability. | |
16. | 2019 | cmTriage (CureMetrix) | A software utilizing AI for the triage of mammography cases.
17. | Deep Learning Image Reconstruction (GE Medical Systems) | ||
18. | Auto Lung Nodule Detection (Samsung Electronics Co. Ltd. (parent company: Samsung Group)) | Detection of lung nodules on chest images for diagnostic support. | |
19. | JPC-01K (JLK Inspection Inc.) | Offering diagnostic support through the detection of prostate cancer using MRI. | |
20. | syngo.Breast Care (Siemens Healthcare GmbH (parent company: Siemens AG)) | Providing interpretation and reporting services to offer diagnostic support using mammograms. | |
21. | Aquilion ONE (TSX-305A/6) V8.9 with AiCE (Canon Medical Systems Corporation) | A device capable of capturing and displaying cross-sectional volumes of the entire body, including the head, with the unique ability to image whole organs within a single rotation. | |
22. | ProFound AI for Digital Breast Tomosynthesis (iCAD Inc.) | A software device for computer-assisted detection and diagnosis (CAD) designed to aid in the interpretation of digital breast tomosynthesis (DBT) exams. | |
23. | RayCare 2.3 (RaySearch Laboratories) | An oncology information system used to facilitate workflows, scheduling, and the management of clinical information for oncology care and post-treatment monitoring. | |
24. | Breast-SlimView (Hera-MI SAS) | Providing diagnostic support by detecting breast cancer through the analysis of mammograms. | |
25. | Vara (Merantix Healthcare GmbH) | Assistance in breast cancer screening and triage through the analysis of mammograms. | |
26. | ProFound AI Software V2.1 (iCAD) | A computer-assisted detection and diagnosis (CAD) software device designed to be used concurrently by interpreting physicians during the assessment of DBT images. | |
27. | 2019 | Transpara (ScreenPoint Medical) | A device designed to assist physicians concurrently while interpreting screening mammograms from compatible Full Field Digital Mammography (FFDM) systems. Its purpose is to help identify regions that appear suspicious for breast cancer and evaluate the likelihood of malignancy. |
28. | QyScore software (Qynapse SAS) | Automating the process of labeling, visualizing, and quantifying the volumes of segmentable brain structures and lesions from MRI images. | |
29. | 2020 | JBD-01K (JLK Inspection Inc.) | Providing diagnostic support through the detection of breast cancer using mammograms. |
30. | InferRead CT Lung (Beijing Infervision Technology Co. Ltd.) | A tool designed for lung cancer screening and management through the analysis of CT scans. | |
31. | b-box (X-rays GmbH) | Evaluating the quality of mammography images and determining breast density using mammograms. | |
32. | densitasAI (Densitas Inc.) | Offering support for the assessment of breast density using mammograms. | |
33. | Broncholab (Fluidda Inc) | Aiding in diagnosing and documenting abnormalities in pulmonary tissue images obtained from CT thoracic datasets. | |
34. | Syngo.CT Lung CAD (Siemens Medical Solutions Inc. (parent company: Siemens AG)) | Aiding in the detection of solid pulmonary nodules while reviewing multi-detector computed tomography (CT) exams of the chest. | |
35. | Genius AI Detection (Hologic, Inc.) | A software device designed to detect potential abnormalities in breast tomosynthesis images. | |
36. | MammoScreen (Therapixel SA) | Assisting in the identification of findings on screening FFDM acquired with compatible mammography systems and evaluating the level of suspicion associated with them. | |
37. | Visage Breast Density (Visage Imaging) | The software application is designed to be utilized alongside compatible full-field digital mammography systems, supporting radiologists in evaluating breast tissue composition. | |
38. | Imagio Breast Imaging System (Seno Medical Instruments, Inc.) | Enables an enhanced classification of breast masses in comparison to using ultrasound alone, incorporating AI-based software | |
39. | 2021 | Vivo Software Application (DiA Imaging Analysis Ltd.) | It provides an objective automated AI-based ejection fraction analysis. |
40. | Vantage Galan 3T, MRT-3020, V6.0 with AiCE Reconstruction Processing Unit for MR (Canon Medical Systems Corporation) | The Advanced intelligent Clear-IQ Engine (AiCE) deep learning reconstruction is used for the construction of MR images. | |
41. | GI Genius (Cosmo Artificial Intelligence - AI Ltd.) | It helps physicians detect colorectal polyps of various sizes, shapes, and morphologies. | |
42. | Chest-CAD (Imagen Technologies, Inc.) | Uses AI to help clinicians identify findings on chest images that would otherwise be missed or misinterpreted (a reported miss/misinterpretation rate of 47%). | |
43. | 2022 | Precise Image (Philips Medical Systems Nederland, B.V.) | An AI-powered reconstruction algorithm designed for low radiation dose, producing an image appearance that closely resembles filtered back projection (FBP) at a higher dose.
44. | Contour ProtegeAI (MIM Software Inc.) | It uses machine learning algorithms for the processing of CT images. | |
45. | Deep Learning Image Reconstruction (GE Medical Systems) | It uses a deep learning image reconstruction algorithm trained to remove image noise by leveraging MRI raw data. | |
46. | Ingenia, Ingenia CX, Ingenia Elition, Ingenia Ambition, MR 5300 and MR 7700 MR Systems (Philips Medical Systems Nederland B.V.) | It enables physicians to obtain cross-sectional and spectroscopic images. | |
47. | Brainomix 360 Triage ICH (Brainomix Limited) | It is a notification tool that provides real-time alerts to clinicians. | |
Imaging method | Application | Advantages | Disadvantages | References |
X-ray | X-rays are commonly used to detect tumors, bone abnormalities, and other abnormalities in the body. They are often used in combination with other imaging techniques to provide a more comprehensive view. | Widely available, fast, and relatively inexpensive. | Limited soft-tissue contrast compared with CT or MRI. | Malik et al., 2023 and Miwa et al., 2017 [31],[32]
Well suited to visualizing bone and detecting gross structural abnormalities. | Two-dimensional projection, so overlapping structures can obscure lesions. | | |
Useful as a first-line investigation, often combined with other modalities. | Involves exposure to ionizing radiation. | | |
MRI | MRI uses a strong magnetic field and radio waves to produce images of the body's internal structure. MRI is valuable for imaging soft tissues and can provide information about the extent and location of tumors. | Excellent soft tissue contrast, allowing for detailed anatomical visualization. | Relatively long imaging times can be challenging for some patients. | Aisen et al., 1986 and Siegel, 2001 [33],[34] |
No ionizing radiation, making it safer for repeated imaging. | Expensive equipment and higher operational costs. | |||
Can provide functional information through techniques like diffusion-weighted imaging (DWI) and dynamic contrast-enhanced MRI (DCE-MRI). | Limited availability in some regions. | |||
Suitable for imaging various parts of the body. | ||||
Ultrasound | Ultrasound uses high-frequency sound waves to create images of internal structures. It is commonly used to assess the size and characteristics of tumors, guide biopsies, and monitor treatment responses. | Real-time imaging with no ionizing radiation exposure. | Limited penetration through bone and air-filled structures. | Fischerova et al., 2011 [35] |
Non-invasive and widely available. | Limited ability to visualize soft tissues in deep body regions. | |||
Relatively low cost compared to other imaging modalities. | Operator-dependent and potential for variability in image quality. | |||
Suitable for guiding biopsies and minimally invasive procedures. | ||||
PET | PET scans involve injecting a small amount of radioactive material into the body, which accumulates in areas with high metabolic activity (such as cancer cells). The PET scanner detects the radiation emitted by the material and produces images that highlight these active areas. A PET scan is often combined with a CT scan (PET/CT) for more accurate localization. | High sensitivity and specificity for detecting metabolic changes in cancer cells. | Limited spatial resolution compared to other imaging modalities. | Czernin et al., 2002 [36] |
Provides quantitative data on tracer uptake, aiding in treatment response assessment. | Requires the use of a cyclotron for on-site production of short-lived radionuclides. | |||
Can be combined with computed tomography (PET/CT) for precise anatomical localization. | Radiation exposure to patients and healthcare professionals due to the use of radionuclides. | |||
Widely used for staging, restaging, and monitoring therapy response. | ||||
SPECT | SPECT is a nuclear medicine imaging technique that provides three-dimensional images of the distribution of radioactive substances in the patient's bloodstream. The radiotracer emits gamma rays, which are detected by a gamma camera as the patient is positioned within the SPECT scanner. SPECT is particularly useful for imaging internal organs and tissues, and it has applications in cancer detection and staging. | Useful for functional imaging of specific organs and tissues. | Lower spatial resolution compared to PET or CT. | Keown et al., 2020 [37] |
Provides valuable information on perfusion, blood flow, and receptor expression. | Longer imaging acquisition times compared to PET. | |||
Can be used with a variety of radiopharmaceuticals for different applications. | Limited sensitivity in detecting low-level tracer uptake. | |||
Sentinel lymph node mapping | In cancer staging, sentinel lymph nodes (the first lymph nodes to receive drainage from a tumor) are crucial indicators of cancer spread. Radioactive tracers are injected near the tumor, and nuclear imaging helps identify and biopsy these nodes, aiding in accurate staging. | Minimally invasive technique for identifying sentinel lymph nodes. | May result in false negatives due to the possibility of missing metastatic nodes. | Manca et al., 2016 and Petousis et al., 2022 [38],[39] |
Helps avoid unnecessary lymph node dissection in certain cancers. | In certain cases, sentinel nodes may not accurately represent the overall lymph node status. | |||
Accurate staging of cancer spread through lymphatic pathways. | ||||
Mammography | Mammography is a widely used technique for breast cancer detection and screening. It involves X-ray imaging of the breast tissue to identify abnormalities such as masses or microcalcifications that may indicate the presence of cancer. | Effective for detecting breast cancer at early stages, especially in older women. | Limited sensitivity in dense breast tissue, especially in younger women. | Pisano et al., 2006 [40],[41] |
Wide availability and established screening programs. | Potential discomfort during compression for some patients. | |||
Relatively low radiation exposure. | May result in false positives that require additional testing and anxiety. | |||
Can detect small calcifications associated with early breast cancers. | ||||
Thermography | Thermography detects changes in skin temperature, and it has been explored as an adjunctive tool for breast cancer screening. Increased blood flow and metabolic activity in tumors can cause temperature differences, which thermography aims to visualize. | Non-invasive and no ionizing radiation exposure. | Limited sensitivity and specificity compared to other imaging modalities. | Arora et al., 2008 [42] |
Can detect temperature changes associated with increased blood flow in some tumors. | Variability in results due to external factors like room temperature. | |||
Can be used as an adjunctive tool for breast cancer detection. | Not widely accepted as a primary screening tool due to its limitations. | |||
Microscopy | Microscopy, including light microscopy and electron microscopy, is used for detailed examination of tissue samples obtained through biopsies. It provides insights into cellular and tissue morphology, helping pathologists identify cancerous changes and characterize tumors. | Provides high-resolution imaging of tissue samples at the cellular and subcellular level. | Invasive technique requiring tissue samples (biopsies). | Kumar et al., 2014 and Mills et al., 2006 [43],[44] |
Can reveal detailed morphological and histological information. | Limited to ex vivo analysis and may not capture dynamic processes. | |||
Important for diagnostic confirmation and understanding tumor characteristics. | Labor-intensive and time-consuming for comprehensive analysis. | |||
Radionuclide bone scans | Bone scans using radiolabeled bisphosphonates or phosphonates help detect metastatic bone disease. They can identify areas of increased bone turnover, indicating the presence of cancer metastases. | Sensitive for detecting bone metastases and assessing overall skeletal health. | Limited anatomical detail compared to CT or MRI. | Coleman et al., 2001 [45] |
Allows visualization of multiple skeletal sites in a single scan. | Cannot distinguish between active cancer lesions and non-malignant conditions like arthritis. | |||
Can provide early detection of bone metastases before they become symptomatic. | High sensitivity can lead to false-positive findings. | |||
Thyroid cancer imaging | Radioactive iodine (iodine-131) is used to diagnose and treat thyroid cancer. Thyroid cancer cells take up iodine, allowing for imaging and targeted treatment. | Effective for imaging thyroid tissue and thyroid cancer metastases. | Limited application to thyroid and thyroid-related conditions. | Jin et al., 2018 and Brose et al., 2012 [46],[47] |
Allows for targeted therapy using radioactive iodine-131. | The long half-life of iodine-131 requires special precautions for patient and public safety. | |||
| Not suitable for cancers that do not take up iodine, such as some types of thyroid cancer. | | |
Peptide receptor radionuclide therapy (PRRT) | PRRT involves targeting cancer cells that overexpress specific receptors with radiolabeled peptides. It is used for neuroendocrine tumors and some types of prostate cancer. | Targets specific receptors on cancer cells, reducing damage to normal tissues. | Limited to tumors that express the specific receptors targeted by the radiolabeled peptides. | Strosberg et al., 2017 [48] |
Can provide palliative treatment for certain types of neuroendocrine tumors. | Long imaging and treatment times due to the radioactive decay of the radionuclides. | |||
Offers a personalized approach to cancer treatment. | Potential side effects related to radiation exposure and peptide therapy. |
Tool/modality | Description | Limit of detection | Applications | Reference |
Histopathology | Examination of tissues under a microscope to identify disease. | Single cells | Tissue analysis for various cancers. | [50] |
Cytology | Study of individual cells to detect abnormalities. | Single cells | Screening for cervical and other cancers. | [51] |
Immunohistochemistry (IHC) | Use of antibodies to detect specific antigens in cells of a tissue section. | Protein expression levels | Determining cancer subtypes and prognosis. | [50] |
Molecular pathology | Study of molecules within organs, tissues, or bodily fluids. | Varies by assay | Genetic mutations, gene expression. | [52] |
Genetic testing | Analysis of DNA, RNA, chromosomes, proteins, and certain metabolites. | Single nucleotide changes | Hereditary cancer syndromes, targeted therapy decisions. | [52] |
Liquid biopsy | A non-invasive test that detects cancer cells or their DNA in blood. | Circulating tumor DNA | Monitoring, early detection of cancer. | [51] |
Tumor marker tests | Blood tests that can help identify the presence of certain types of cancer. | Varies by marker | Prognosis, monitoring treatment response. | [52]