<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>ScholarWorks Community:</title>
    <link>https://scholarworks.korea.ac.kr/kumedicine/handle/2020.sw.kumedicine/504</link>
    <description />
    <pubDate>Sun, 05 Apr 2026 15:07:14 GMT</pubDate>
    <dc:date>2026-04-05T15:07:14Z</dc:date>
    <item>
      <title>AI-Driven quality assurance in mammography: Enhancing quality control efficiency through automated phantom image evaluation in South Korea</title>
      <link>https://scholarworks.korea.ac.kr/kumedicine/handle/2021.sw.kumedicine/78370</link>
      <description>Title: AI-Driven quality assurance in mammography: Enhancing quality control efficiency through automated phantom image evaluation in South Korea
Authors: Yun, Hoo; Noh, Sanghyun; Cho, Hyungwook; Ko, Eun-yong; Yang, Zepa; Woo, Ok-hee
Abstract: Purpose: To develop and validate a deep learning-based model for automated evaluation of mammography phantom images, with the goal of improving inter-radiologist agreement and enhancing the efficiency of quality control within South Korea’s national accreditation system. Materials and methods: A total of 5,917 mammography phantom images were collected from the Korea Institute for Accreditation of Medical Imaging (KIAMI). After preprocessing, 5,813 images (98.2%) met quality standards and were divided into training, test, and evaluation datasets. Each image included 16 artificial lesions (fibers, specks, masses) scored by certified radiologists. Images were preprocessed, standardized, and divided into 16 subimages. An EfficientNetV2_L-based model, selected for its balance of accuracy and computational efficiency, was used to predict both lesion existence and scoring adequacy (score of 0.0, 0.5, 1.0). Model performance was evaluated using accuracy, F1-score, area under the curve (AUC), and explainable AI techniques. Results: The model achieved classification accuracy of 87.84%, 93.43%, and 86.63% for fibers (F1: 0.7292, 95% bootstrap CI: 0.711, 0.747), specks (F1: 0.7702, 95% bootstrap CI: 0.750, 0.791), and masses (F1: 0.7594, 95% bootstrap CI: 0.736, 0.781), respectively. AUCs exceeded 0.97 for 0.0-score detection and 0.94 for 0.5-score detection. Notably, the model demonstrated strong discriminative capability in 1.0-score detection across all lesion types. Model interpretation experiments confirmed adherence to guideline criteria: fiber scoring reflected the “longest visible segment” rule; speck detection showed score transitions at two and four visible points; and mass evaluation prioritized circularity but showed some size-related bias. Saliency maps confirmed alignment with guideline-defined lesion features while ignoring irrelevant artifacts.
Conclusion: The proposed deep learning model accurately assessed mammography phantom images according to guideline criteria and achieved expert-level performance. By automating the evaluation process, the model can improve scoring consistency and significantly enhance the efficiency and scalability of quality control workflows.</description>
      <pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholarworks.korea.ac.kr/kumedicine/handle/2021.sw.kumedicine/78370</guid>
      <dc:date>2025-09-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Artificial Intelligence-Based Classification and Segmentation of Bladder Cancer in Cystoscope Images</title>
      <link>https://scholarworks.korea.ac.kr/kumedicine/handle/2021.sw.kumedicine/76263</link>
      <description>Title: Artificial Intelligence-Based Classification and Segmentation of Bladder Cancer in Cystoscope Images
Authors: Hwang, Won Ku; Jo, Seon Beom; Han, Da Eun; Ahn, Sun Tae; Oh, Mi Mi; Park, Hong Seok; Moon, Du Geon; Choi, Insung; Yang, Zepa; Kim, Jong Wook
Abstract: Background/Objectives: Cystoscopy is necessary for diagnosing bladder cancer, but it has limitations in identifying ambiguous lesions, such as carcinoma in situ (CIS), which leads to a high recurrence rate of bladder cancer. With the significant advancements in deep learning in the medical field, several studies have explored its application in cystoscopy. This study aimed to utilize the VGG19 and Deeplab v3+ deep learning models to classify and segment cystoscope images, respectively. Methods: We classified cystoscope images obtained from 772 patients based on morphology (normal, papillary, flat, mixed) and biopsy results (normal, Ta, T1, T2, CIS, etc.). Experienced urologists annotated and labeled the lesion areas and image categories. The classification model for bladder cancer lesions, annotated with pathological results, was developed using VGG19 with an additional fully connected layer, utilizing sparse categorical cross-entropy as the loss function. The Deeplab v3+ model was used for segmenting each morphological type of bladder cancer in the cystoscope images, employing the Dice coefficient loss function. The classification model was evaluated using validation accuracy and correlation with biopsy results, while the segmentation model was assessed using the Intersection over Union (IoU) combined with binary accuracy. Results: The dataset was split into training and validation sets with a 4:1 ratio. The VGG19 classification model achieved an accuracy score of 0.912. The Deeplab v3+ segmentation model achieved an IoU of 0.833 and a binary accuracy of 0.951. Visual analysis revealed a high similarity between the lesions identified by Deeplab v3+ and those labeled by experts. Conclusions: In this study, we applied two deep learning models using well-annotated datasets of cystoscopic images. Both VGG19 and Deeplab v3+ demonstrated high performance in classification and segmentation, respectively.
These models can serve as valuable tools for bladder cancer research and may aid in the diagnosis of bladder cancer.</description>
      <pubDate>Wed, 01 Jan 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholarworks.korea.ac.kr/kumedicine/handle/2021.sw.kumedicine/76263</guid>
      <dc:date>2025-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Enhanced Detection Performance of Acute Vertebral Compression Fractures Using a Hybrid Deep Learning and Traditional Quantitative Measurement Approach: Beyond the Limitations of Genant Classification</title>
      <link>https://scholarworks.korea.ac.kr/kumedicine/handle/2021.sw.kumedicine/76340</link>
      <description>Title: Enhanced Detection Performance of Acute Vertebral Compression Fractures Using a Hybrid Deep Learning and Traditional Quantitative Measurement Approach: Beyond the Limitations of Genant Classification
Authors: Lee, Jemyoung; Kim, Minbeom; Park, Heejun; Yang, Zepa; Woo, Ok Hee; Kang, Woo Young; Kim, Jong Hyo
Abstract: Objective: This study evaluated the applicability of the classical method, height loss ratio (HLR), for identifying major acute compression fractures in clinical practice and compared its performance with deep learning (DL)-based VCF detection methods. Additionally, it examined whether combining the HLR with DL approaches could enhance performance, exploring the potential integration of classical and DL methodologies. Methods: Three DL-based VCF detection methods were compared: End-to-End VCF Detection (EEVD), Two-Stage VCF Detection with Segmentation and Detection (TSVD_SD), and Two-Stage VCF Detection with Detection and Classification (TSVD_DC). The models were evaluated on a dataset of 589 patients, focusing on sensitivity, specificity, accuracy, and precision. Results: TSVD_SD outperformed all other methods, achieving the highest sensitivity (84.46%) and accuracy (95.05%), making it particularly effective for identifying true positives. The complementary use of DL methods with HLR further improved detection performance. For instance, combining HLR-negative cases with TSVD_SD increased sensitivity to 87.84%, reducing missed fractures, while combining HLR-positive cases with EEVD achieved the highest specificity (99.77%), minimizing false positives. Conclusion: These findings demonstrated that DL-based approaches, particularly TSVD_SD, provided robust alternatives or complements to traditional methods, significantly enhancing diagnostic accuracy for acute VCFs in clinical practice.</description>
      <pubDate>Wed, 01 Jan 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholarworks.korea.ac.kr/kumedicine/handle/2021.sw.kumedicine/76340</guid>
      <dc:date>2025-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Improved Detection Accuracy of Chronic Vertebral Compression Fractures by Integrating Height Loss Ratio and Deep Learning Approaches</title>
      <link>https://scholarworks.korea.ac.kr/kumedicine/handle/2021.sw.kumedicine/75370</link>
      <description>Title: Improved Detection Accuracy of Chronic Vertebral Compression Fractures by Integrating Height Loss Ratio and Deep Learning Approaches
Authors: Lee, Jemyoung; Park, Heejun; Yang, Zepa; Woo, Ok Hee; Kang, Woo Young; Kim, Jong Hyo
Abstract: Objectives: This study aims to assess the limitations of the height loss ratio (HLR) method and introduce a new approach that integrates a deep learning (DL) model to enhance vertebral compression fracture (VCF) detection performance. Methods: We conducted a retrospective study on 589 patients with chronic VCFs. We compared four different methods: HLR-only, DL-only, a combination of HLR and DL for positive VCF, and a combination of HLR and DL for negative VCF. The models were evaluated using dice similarity coefficient, sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC). Results: The combined method (HLR + DL, positive) demonstrated the best performance with an AUROC of 0.968, sensitivity (94.95%), and specificity (90.59%). The HLR-only and the HLR + DL (negative) also showed strong discriminatory power, with AUROCs of 0.948 and 0.947, respectively. The DL-only model achieved the highest specificity (95.92%) but exhibited lower sensitivity (82.83%). Conclusions: Our study highlights the limitations of the HLR method in detecting chronic VCFs and demonstrates the improved performance of combining HLR with DL models.</description>
      <pubDate>Fri, 01 Nov 2024 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholarworks.korea.ac.kr/kumedicine/handle/2021.sw.kumedicine/75370</guid>
      <dc:date>2024-11-01T00:00:00Z</dc:date>
    </item>
  </channel>
</rss>

