0% found this document useful (0 votes)
11 views

Main Project-PPT Script

The document presents a project on 'Automatic Fruit Quality Detection using Deep Learning,' focusing on a system that utilizes MobileNetV2 for real-time classification of fruit quality, specifically bananas. The project outlines objectives, applications, system architecture, and results, demonstrating effective integration of deep learning with a conveyor system for automated sorting. It also includes a literature review, hardware feasibility study, design methodology, and project management aspects, culminating in a successful implementation and testing of the system.

Uploaded by

adhmj33wrk
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
11 views

Main Project-PPT Script

The document presents a project on 'Automatic Fruit Quality Detection using Deep Learning,' focusing on a system that utilizes MobileNetV2 for real-time classification of fruit quality, specifically bananas. The project outlines objectives, applications, system architecture, and results, demonstrating effective integration of deep learning with a conveyor system for automated sorting. It also includes a literature review, hardware feasibility study, design methodology, and project management aspects, culminating in a successful implementation and testing of the system.

Uploaded by

adhmj33wrk
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 13

Automatic Fruit Quality Detection using

Deep Learning
Presentation Script
Slide 1: Title Slide

"Good day everyone. Today we're presenting our project on 'Automatic Fruit Quality Detection
using Deep Learning.' I'm Adithyan Manoj presenting on behalf of our team consisting of myself,
Rohith M S, Ruben Davis Saji, and Muhammad Hadhi V M, under the guidance of Ms. Rashida
K, Assistant Professor in the Department of Electronics and Communication."

Slide 2: Contents

"Let me briefly outline what we'll be covering today. We'll start with an introduction to our project,
followed by a literature review of relevant works in this field. We'll then discuss our system
overview, design and implementation details, results we've achieved, and conclude with our
findings. We'll also cover our project management aspects including work schedules, Gantt
charts, and budgeting."

Slide 3: Introduction

"The global fruit industry faces significant challenges in quality control, particularly as demand
increases for efficient and accurate sorting systems. Our project addresses these challenges by
introducing an automated, scalable, and reliable classification system using deep learning.
We've focused on creating a solution that can operate in real-time and be deployed in industrial
settings."

Slide 4: Objectives

"Our project has five main objectives. First, we aim to achieve real-time detection and
classification of fruit quality. Second, we're implementing category-based sorting to separate
fruits by quality grade. Third, we're enhancing accuracy through deep learning models. Fourth,
we're improving environmental monitoring conditions during the detection process. And finally,
we're enabling automated sorting using a conveyor belt system to demonstrate practical
application."

Slide 5: Applications
"This technology has several practical applications. It can be integrated into automated sorting
and grading systems in packaging facilities, optimize supply chain and post-harvest
management, and enhance research and development in agricultural technology. It's particularly
valuable for export quality assurance, where consistent grading is essential."

Slide 6: Scope

"Looking at the broader scope, our system can be expanded to handle multiple fruit types
beyond our initial focus on bananas. It can be integrated with IoT and cloud platforms for remote
monitoring and control. We can enhance quality metrics to include more parameters, develop
adaptive learning models that improve over time, and scale the deployment for industrial use."

Slide 7: Novelty

"What makes our approach novel is the utilization of MobileNetV2 for our deep learning model,
which balances accuracy with computational efficiency. Our system combines detection,
classification, and automated physical sorting in one integrated solution. We've implemented
this on the Jetson Nano platform, enabling efficient, real-time fruit quality analysis and sorting in
a compact, cost-effective package."

Slides 8-15: Literature Review

"We conducted an extensive literature review of ten relevant papers from 2017 to 2024. Each
study contributed valuable insights to our project:

The 2023 study on general machine learning models using Vision Transformers showed the
potential for universal fruit quality assessment models, though with limitations for certain fruits.

The 2020 study on pomegranate growth detection using transfer learning demonstrated high
accuracy with RF models, though with longer training times.

The 2023 real-time oil palm grading system using YOLOv4 and smartphones showed the
potential for mobile deployment.

The 2021 research on fruit quality recognition using deep learning algorithms provided a
detailed CNN architecture.

The 2023 study using Vision Transformers achieved high accuracy for multiple fruit types,
demonstrating good generalizability.

The 2024 research on enhancing fruit quality detection used EfficientB2 CNN models for
improved accuracy and efficiency.

The 2022 study using YOLOv3 achieved 90% precision despite limited training data.
The 2021 review of computer vision methods provided insights on color, texture, and defect
detection techniques.

The 2022 DenseNet201 approach achieved impressive 99.67% accuracy in classification.

And finally, the 2017 study on fruit detection in orchards using Faster R-CNN achieved high
F1-scores for apples and mangoes.

These studies informed our approach and helped us avoid common pitfalls while incorporating
proven techniques."

Slide 16: System Overview

"Now, let's look at our system architecture. Our setup consists of several integrated
components: a USB camera for image acquisition, a pre-processing unit, our deep learning
model running on Jetson Nano, a decision-making module, and the sorting mechanism actuator.
The system operates as a pipeline, with images captured, processed, classified, and then
physical sorting executed based on classification results."

Slide 17-19: Hardware Feasibility Study

"For our computing platform, we selected the Jetson Nano due to its optimal balance of
performance, power efficiency, and cost-effectiveness compared to alternatives like Jetson
Xavier NX or Raspberry Pi with Coral accelerator. The Nano's quad-core ARM processor and
128-core GPU provide sufficient computing power for our deep learning model while remaining
within our budget constraints at approximately ₹27,000.

For image acquisition, we evaluated several camera options and selected the LG VC23GA
offering 1080p resolution at 30fps via a simple USB interface, providing good image quality at a
reasonable cost of ₹3,681.

On the software side, we compared various deep learning architectures and selected
MobileNetV2 for its excellent balance of accuracy and computational efficiency. Compared to
MobileNetV1 and MobileNetV3, MobileNetV2 offers better accuracy for similar computational
complexity, making it ideal for our edge deployment scenario."

Slide 20: Design Methodology

"Our design methodology followed a systematic approach with the system architecture designed
around the Jetson Nano as the central processing unit, conveyor belt mechanisms for fruit
transport, and image capture modules strategically positioned for optimal fruit viewing."

Slide 21: Conveyor Belt System


"The physical system consists of two conveyor belts driven by DC motors. The first belt
transports all fruits past the imaging system, while the second receives fruits classified as 'good'
or 'intermediate' quality. We incorporated a push-back mechanism to divert 'bad' quality fruits off
the main conveyor path. The USB camera is mounted above the first conveyor for real-time
image capture in controlled lighting conditions."

Slide 22-23: Data Collection and Preprocessing

"For our dataset, we collected high-resolution images of bananas in three quality classes: good,
bad, and intermediate. We sourced these images from Kaggle (1,000 images per class),
Roboflow Universe (2,000 images per class), and Mendeley Datasets (4,000 images per class).
To enhance model robustness, we used the Python library Augmentor to expand our dataset
through techniques like rotation, flipping, and brightness adjustments."

Slide 24-26: Model Selection and Training

"We selected MobileNetV2 for our classification model due to its excellent
performance-to-computation ratio and suitability for edge devices like the Jetson Nano. We
implemented the model using PyTorch and employed transfer learning by freezing the base
layers and fine-tuning the fully connected layers, which significantly reduced training time.

For training, we used a learning rate of 5×10^(-3), SGD optimizer, and CrossEntropyLoss
function with a batch size of 8. The training process demonstrated steady improvement in
accuracy over epochs while maintaining reasonable loss metrics."

Slide 27-28: Deployment and Execution in Jetson Nano

"Deploying our model to the Jetson Nano required setting up the appropriate environment with
Python 3.6.9, PyTorch 1.8, TorchVision 0.9, and OpenCV 4.5.1. After training our model using
PyTorch on a more powerful workstation, we transferred the trained model to the Jetson Nano.

Our control logic for the conveyor system is straightforward: fruits classified as 'good' or
'intermediate' quality continue to the second conveyor belt, while those classified as 'bad' quality
are pushed away from the main conveyor path by our actuator mechanism."

Slide 29-32: Hardware Design

"Here you can see our hardware design, showing the complete system setup including both
conveyor belts. The first conveyor belt incorporates our imaging box for controlled lighting
conditions during image capture. The second conveyor belt receives the sorted fruits that have
passed quality inspection."

Slide 33-34: Tools Required

"Our implementation required several hardware components:


●​ Jetson Nano (4GB RAM, 5V operating voltage)
●​ Arduino Uno for additional control functions
●​ 12V relay and AC motor speed regulator for conveyor control
●​ PWM modulator for precise motor speed adjustment
●​ IR proximity sensors for fruit detection
●​ Solid State Relay (SSR 40DA) for high-current switching
●​ LG VC23GA webcam for image acquisition
●​ Servo motors for the pushing mechanism
●​ L298 motor driver for DC motor control"

Slide 35-40: Setting up Environment in Jetson Nano

"Setting up the Jetson Nano environment involved several steps: power setup, flashing JetPack
on an SD card, connecting peripherals, initial setup and first boot, installing all dependencies,
setting up the Python environment, and finally running our ML model.

We compared cloud deployment versus Jetson Nano deployment, noting that while cloud
options offer higher throughput and scalability, the Jetson Nano provides lower latency, better
data privacy, and is more suitable for real-time edge applications.

To optimize performance, we implemented TensorRT, which accelerates deep learning inference


using NVIDIA GPUs. This involved converting our PyTorch model to TensorRT format, enabling
faster GPU execution through CUDA, and supporting real-time deployment on the Jetson Nano.

Our deployment pipeline followed a systematic flow from model training to TensorRT
optimization and finally real-time inference on the Nano."

Slide 41-44: Results - Classified Output of MobileNetV2

"Our results demonstrate the effectiveness of the MobileNetV2 model in classifying banana
quality. Here we can see examples of classification results for 'good,' 'intermediate,' and 'bad'
quality bananas, with the model reporting high confidence scores. For example, the model
classified this good quality banana with 100% confidence, distinguishing it clearly from the other
classes."

Slide 45-46: Confusion Matrix and Validation Graph

"The confusion matrix shows our model's performance across all three classes. As you can see,
the model demonstrates high accuracy in distinguishing between good, intermediate, and bad
quality bananas with minimal misclassifications.

The validation graph illustrates how our model's accuracy improved over training epochs while
maintaining reasonable loss metrics, demonstrating effective learning without overfitting."

Slide 47-50: Physical Implementation


"Here we can see our physical implementation, including the first conveyor belt with the imaging
box, which provides controlled lighting conditions for consistent image quality. The second
conveyor receives the filtered 'good' and 'intermediate' quality fruits, while the bad quality fruits
are diverted by our pushing mechanism."

Slide 51: Environment Setup for Jetson Nano

"This image shows our Jetson Nano setup with all the necessary peripherals and connections
for real-time fruit quality detection and sorting control."

Slide 52: Conclusion

"In conclusion, our system successfully integrates a conveyor system, Jetson Nano, and a
MobileNetV2 deep learning model to enable real-time, high-precision sorting of bananas by
quality. The modular design—including camera, LED lighting, and pushing
mechanism—ensures efficient, automated sorting and adaptability for various grading needs.
This project demonstrates the practical application of deep learning in agricultural automation."

Slide 53-54: Bibliography

"We've referenced several key papers that informed our approach, focusing on recent advances
in fruit quality assessment using deep learning techniques. These references provided valuable
insights into model selection, dataset preparation, and system design."

Slide 55-56: Work Division

"Our team divided the work based on individual strengths. Muhammed Hadhi focused on
hardware research, planning, mechanical design, and testing the conveyor belt. Rohith handled
setting up the Jetson environment and optimizing the ML model. Ruben worked on developing
the YOLO model and real-time analysis environment. Adithyan developed the CNN model and
assisted with integration and testing on the Jetson Nano."

Slide 57-58: Gantt Chart and PERT Chart

"Our project execution followed this Gantt chart, which outlines the timeline for major project
phases from system requirement analysis through final testing and debugging.

Similarly, our PERT chart illustrates the dependencies between different project tasks, ensuring
efficient project management and timely completion."

Slide 59: Overall Budget

"Our total project budget came to ₹21,525, with the Jetson Nano being the most significant
expense at ₹15,000. Other costs included motors, camera, sensors, display, and various
accessories needed for the mechanical assembly."
Slide 60: Milestones Completed

"I'm pleased to report that we successfully completed all nine major project milestones:

1.​ First conveyor belt and imaging box setup


2.​ MobileNetV2 training for banana quality classification
3.​ Setting up the Jetson Nano environment
4.​ Deployment on Jetson Nano
5.​ Real-time testing
6.​ Second conveyor belt construction
7.​ Pushing mechanism construction
8.​ Overall system integration
9.​ Final testing and debugging"

Slide 61: Thank You Slide

"Thank you for your attention. We're now happy to answer any questions about our automatic
fruit quality detection system."

Literature Reviews-Detailed
1. A General Machine Learning Model for Assessing Fruit Quality Using
Deep Image Features
Overview:​

This 2023 study (as per the PDF) developed a general machine learning model using Vision
Transformers (ViT) to assess fruit quality based on deep image features. It focused on visual
characteristics extracted from images to classify fruit quality across multiple types.

Points Considered for Our Project:

●​ The use of deep image features for quality assessment inspired our focus on robust
feature extraction using MobileNetV2.
●​ The general applicability across fruit types influenced our scope to design a system
adaptable to various fruits beyond bananas.

Analysis:
●​ Objective: To create a versatile model for fruit quality assessment using deep visual
features, applicable to diverse fruit types.
●​ Methodology: Employed Vision Transformers to process high-resolution images,
training on a dataset of multiple fruits with labeled quality categories (e.g., good, bad).
●​ Results: Achieved high accuracy for most fruits but struggled with bananas and
pomegranates due to reliance on visual features alone, missing non-visual quality factors
(e.g., texture, taste).
●​ Limitations: Limited performance on fruits with complex quality metrics beyond visual
appearance; computationally intensive due to ViT’s architecture.

2. A Novel Transfer Learning Approach for Pomegranate Growth Detection


Overview:​

This 2020 study utilized transfer learning with a Random Forest classifier to detect pomegranate
growth stages, leveraging pre-trained deep learning models for feature extraction.

Points Considered for Our Project:

●​ Transfer learning’s effectiveness informed our use of pre-trained MobileNetV2,


fine-tuned for banana quality detection.
●​ The focus on a specific fruit encouraged us to tailor our model initially to bananas while
keeping scalability in mind.

Analysis:

●​ Objective: To accurately detect pomegranate growth stages using transfer learning for
efficient feature extraction and classification.
●​ Methodology: Applied a pre-trained CNN (e.g., VGG or ResNet) for feature extraction,
followed by Random Forest classification, trained on pomegranate images across growth
stages.
●​ Results: Achieved 98% accuracy in classifying growth stages, demonstrating transfer
learning’s power with limited datasets.
●​ Limitations: Specific to pomegranates, reducing generalizability; lacked real-time
deployment considerations, unlike our conveyor-based system.

3. Real-Time Oil Palm Grading System Using Mobile and YOLOv4


Overview:​
This 2023 study (assumed based on context) developed a real-time grading system for oil palm
fruits using the YOLOv4 object detection model on mobile devices, targeting ripeness and
quality.

Points Considered for Our Project:

●​ Real-time detection inspired our system’s real-time processing goal on the Jetson Nano.
●​ YOLOv4’s efficiency influenced our consideration of lightweight models, though we
opted for MobileNetV2 for edge compatibility.

Analysis:

●​ Objective: To enable real-time oil palm grading on mobile platforms for field use.
●​ Methodology: Used YOLOv4 for object detection and classification, deployed on mobile
devices, trained on oil palm images with ripeness labels.
●​ Results: High detection speed and accuracy in real-time, suitable for on-site grading.
●​ Limitations: Focused solely on oil palms; mobile deployment may not scale to conveyor
systems like ours; YOLOv4’s complexity could strain edge devices compared to
MobileNetV2.

4. Fruit Quality Recognition Using Deep Learning Algorithm


Overview:​

This 2021 study (as per the PDF) employed a CNN for fruit quality classification, emphasizing
preprocessing techniques like grayscale conversion and segmentation for improved recognition.

Points Considered for Our Project:

●​ Preprocessing techniques informed our use of the Augmentor library for data
preparation.
●​ CNN’s success in quality recognition reinforced our choice of a deep learning approach.

Analysis:

●​ Objective: To classify fruit quality using a CNN with enhanced preprocessing for better
feature extraction.
●​ Methodology: Applied a custom CNN with preprocessing (grayscale, segmentation),
trained on a fruit dataset with quality labels.
●​ Results: Effective for visual quality but struggled with complex fruits and non-visual
attributes (e.g., taste).
●​ Limitations: Limited to visual features; preprocessing complexity may not suit real-time
systems like ours.
5. A General Machine Learning Model for Assessing Fruit Quality Using
Deep Image Features
Overview:​

(Assumed duplicate of #1 unless distinct; treated as same here.) Identical to the 2023 ViT-based
study above.

Points Considered for Our Project:​

(Same as #1) Deep feature extraction and multi-fruit applicability shaped our model design.

Analysis:​

(Same as #1) See above for objective, methodology, results, and limitations.

6. Enhancing Fruit Quality Detection with Deep Learning Models


Overview:​

This study (inferred) explored advanced deep learning models to improve fruit quality detection,
likely focusing on accuracy and robustness across conditions.

Points Considered for Our Project:

●​ Emphasis on enhancing detection accuracy guided our fine-tuning of MobileNetV2.


●​ Robustness considerations influenced our controlled imaging setup.

Analysis:

●​ Objective: To boost fruit quality detection accuracy using advanced deep learning
techniques.
●​ Methodology: Likely used a modern CNN (e.g., ResNet, EfficientNet) with extensive
training on diverse fruit images.
●​ Results: Improved accuracy over traditional methods, adaptable to various fruits.
●​ Limitations: Potentially high computational cost; may not address real-time edge
deployment as we do.
7. Determination of Fruit Quality by Image Using Deep Neural Network
Overview:​

This study (inferred) used a deep neural network (DNN) to assess fruit quality from images,
focusing on quality metrics like ripeness or defects.

Points Considered for Our Project:

●​ DNN’s ability to handle image-based quality assessment supported our deep learning
choice.
●​ Focus on specific quality metrics inspired our classification into good, intermediate, and
bad categories.

Analysis:

●​ Objective: To determine fruit quality using DNNs based on image data.


●​ Methodology: Employed a DNN (possibly CNN-based) trained on fruit images with
quality annotations.
●​ Results: High accuracy for defined quality metrics.
●​ Limitations: May lack real-time capability; specific fruit focus could limit scalability.

8. Fruits and Vegetables Quality Evaluation Using Computer Vision


Overview:​

This study (inferred) applied computer vision techniques, likely including deep learning, to
evaluate the quality of fruits and vegetables.

Points Considered for Our Project:

●​ Broad applicability to fruits and vegetables encouraged our future scope expansion.
●​ Computer vision’s role in quality evaluation aligned with our imaging box design.

Analysis:

●​ Objective: To evaluate quality across fruits and vegetables using computer vision.
●​ Methodology: Combined traditional vision techniques (e.g., edge detection) with deep
learning, trained on a mixed dataset.
●​ Results: Effective for diverse produce but possibly less precise for specific fruits.
●​ Limitations: General approach may dilute accuracy for individual types; real-time
feasibility unclear.
9. Fruit Quality Assessment with Densely Connected Convolutional Neural
Network
Overview:​

This study (inferred) used DenseNet, a densely connected CNN, for fruit quality assessment,
leveraging its efficiency in feature reuse.

Points Considered for Our Project:

●​ DenseNet’s efficiency influenced our consideration of lightweight models, though we


chose MobileNetV2 for edge compatibility.
●​ Feature reuse concept supported our focus on robust feature extraction.

Analysis:

●​ Objective: To assess fruit quality using DenseNet for efficient and accurate
classification.
●​ Methodology: Implemented DenseNet, trained on fruit images with quality labels,
emphasizing feature connectivity.
●​ Results: High accuracy with reduced parameters compared to other CNNs.
●​ Limitations: DenseNet’s complexity may hinder real-time edge deployment;
fruit-specific focus unclear.

10. Deep Fruit Detection in Orchards


Overview:​

This study (inferred) focused on detecting fruits in orchard environments using deep learning,
likely for harvesting or monitoring.

Points Considered for Our Project:

●​ Detection in complex environments informed our imaging box’s controlled conditions.


●​ Orchard application inspired potential future outdoor extensions.

Analysis:

●​ Objective: To detect fruits in orchards using deep learning for agricultural automation.
●​ Methodology: Likely used a detection model (e.g., YOLO, Faster R-CNN) trained on
orchard images.
●​ Results: Effective detection in natural settings, supporting automation.
●​ Limitations: Outdoor focus differs from our controlled conveyor setup; may not address
quality classification.

Summary of Influence on Our Project

Your project (Automatic Fruit Quality Detection Using Deep Learning) integrates insights from
these papers as follows:

●​ Objective Alignment: Inspired by general and specific quality assessment goals


(Papers 1, 2, 5, 6, 7, 9), we aimed for real-time, accurate banana quality detection with
scalability potential.
●​ Methodology Inspiration: Transfer learning (Paper 2), preprocessing (Paper 4), and
efficient models (Papers 3, 9) shaped our use of MobileNetV2 on Jetson Nano with
Augmentor preprocessing.
●​ Results Benchmarking: High accuracies (Papers 1-10) set a target, achieved with 83%
overall accuracy in our system.
●​ Limitations Addressed: We overcame real-time deployment issues (Papers 1, 4, 6, 7,
9) with edge computing and conveyor integration, and plan to expand beyond visual
features (Papers 1, 4).

You might also like