This project was created in 2023 as part of a university course on Computer Vision. It uses Python 3.10.12 and focuses on established image processing techniques. The project expands upon canozcivelek's traffic-sign-recognition, integrating recent advancements.
- Updated Libraries: I've updated nearly all dependencies to their latest stable versions to ensure the project runs smoothly on Python 3.10.12, benefiting from enhanced features and security.
- Live OAK-D Camera Integration: A new feature allows live testing of the trained model with the OAK-D camera, paving the way for real-world application and evaluation.
Clone the Repository
To get started, clone this repository using HTTPS:
git clone https://github.com/svndin/image-processing-lab

Set Up Your Environment
Navigate to the project directory and create a virtual environment:
cd image-processing-lab
python3 -m venv image-processing
source image-processing/bin/activate

Install the Dependencies
Install the necessary Python packages from the requirements.txt file:
python3 -m pip install -r requirements.txt

Launch Jupyter Notebook
Within the project directory, start Jupyter Notebook:
jupyter notebook

Navigate through the interface to open the provided New_trafficSigns.ipynb notebook.
Explore the Notebooks
- New_trafficSigns.ipynb: Start here to understand the project structure and logic.
- Sequential Execution: Follow the notebook, executing code blocks in order to train your model.
- Model Creation: Step-by-step instructions will guide you through creating and saving your traffic sign recognition model.
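As a rough illustration of the kind of preprocessing such a notebook applies before training, the sketch below converts a batch of RGB sign images to normalized grayscale. This is a minimal example, not the notebook's exact code: the 32×32 input size and the grayscale-plus-normalize steps are assumptions drawn from typical traffic-sign pipelines, so check New_trafficSigns.ipynb for the real details.

```python
import numpy as np

def preprocess(images: np.ndarray) -> np.ndarray:
    """Convert a batch of RGB sign images to normalized grayscale.

    Assumes `images` has shape (N, H, W, 3) with uint8 pixels; the exact
    steps in New_trafficSigns.ipynb may differ.
    """
    # Luminance-weighted grayscale conversion.
    gray = images[..., 0] * 0.299 + images[..., 1] * 0.587 + images[..., 2] * 0.114
    # Scale pixel values to [0, 1] for stable training.
    gray = gray.astype(np.float32) / 255.0
    # Add the channel axis Keras expects: (N, H, W, 1).
    return gray[..., np.newaxis]

batch = np.random.randint(0, 256, size=(4, 32, 32, 3), dtype=np.uint8)
x = preprocess(batch)
print(x.shape)  # (4, 32, 32, 1)
```

Feeding the network values in [0, 1] rather than raw 0–255 pixels is a common choice because it keeps gradients in a well-behaved range during training.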
Final Notes
- After training, compare your model's predictions (Predicted sign: [ ]) with the correct signs listed in signnames.csv.
- The model is not yet fully optimized; there is still room to improve its accuracy.
- For a detailed guide on setting up the OAK-D camera, refer to the instructions in the next section.
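The lookup against signnames.csv can be automated with a few lines of standard-library Python. The `ClassId,SignName` column layout shown here is the usual GTSRB convention and is an assumption — check your copy of the file before relying on it.

```python
import csv
import io

# A few rows in the GTSRB-style format signnames.csv is assumed to use
# (ClassId,SignName); read the real file with open("signnames.csv") instead.
SAMPLE = """ClassId,SignName
0,Speed limit (20km/h)
1,Speed limit (30km/h)
14,Stop
"""

def load_sign_names(fh) -> dict[int, str]:
    """Map numeric class ids to human-readable sign names."""
    return {int(row["ClassId"]): row["SignName"] for row in csv.DictReader(fh)}

names = load_sign_names(io.StringIO(SAMPLE))
predicted_class = 14  # e.g. int(np.argmax(model.predict(x))) from the notebook
print(f"Predicted sign: [{names[predicted_class]}]")  # Predicted sign: [Stop]
```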
The New_OAK-D.ipynb notebook takes the project to the next level by integrating live traffic sign recognition using the OAK-D camera. This section allows you to apply the traffic sign recognition model in real-world scenarios, showcasing the practical application of the technology.
OAK-D Camera Setup
To use the OAK-D camera for live recognition on Linux, run the command below. (Users of other operating systems should consult the Luxonis documentation.)
sudo wget -qO- https://docs.luxonis.com/install_depthai.sh | bash

- Model Preparation: The project utilizes the model created in New_trafficSigns.ipynb. For ease of use, a pre-trained model, new_traffic_signs_model.keras, is also provided in the repository.
- Notebook Execution: Open and run the New_OAK-D.ipynb notebook within the Jupyter Notebook environment. This notebook guides you through running the live recognition system with the OAK-D camera.
- Live Recognition: As you proceed with the notebook, you'll be able to perform live traffic sign recognition. The system uses the camera feed to detect signs and display their classifications in real time.
Maximize your OAK-D camera's capabilities by incorporating depth functionality with OpenVINO and blobconverter. This process not only speeds up inference but also enables depth sensing for superior performance.
Optimizing with OpenVINO
First, install OpenVINO to optimize your model for Intel hardware, which significantly boosts inference performance. Detailed installation instructions are available in OpenVINO's official documentation.
Converting with blobconverter
Next, convert your optimized model to a blob format using blobconverter, ensuring compatibility with the OAK-D. This conversion can be done easily at blobconverter.luxonis.
Deploy and Recognize with Depth
Finally, deploy the optimized and converted model on your OAK-D camera. This upgrade not only enhances traffic sign recognition but also provides insights into their spatial positioning, adding a new dimension of depth and accuracy to your system.
Let's continue to explore, innovate, and transform the world, one traffic sign at a time. Happy coding!
