An innovative technical solution to prevent insomnia and noise-induced hearing loss
sleep monitoring, headphones/earphones, wearables, pulse, mobile application, multimedia playing, insomnia, noise-induced hearing loss, NIHL, heart rate
I am an AI grad student at Northeastern University. I was previously a Data Engineer at Lowe’s and GE Healthcare. I received my B.Tech from Amrita School of Engineering, Coimbatore.
perishable tracking, IoT, food quality monitoring, food products, food wastage, technology for quality monitoring, food transport
CCDAK is targeted at developers and solutions architects. It covers Confluent and Apache Kafka®, with a particular focus on the platform knowledge needed to develop applications that work with Kafka.
Fine-tuned a pre-trained transformer encoder with an MLP decoder into a semantic segmentation model. The dataset has 18 labels and 17k samples, and the model contains about 3.7 million parameters. The model achieved an mIoU of 0.514.
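As a minimal sketch of this setup, the snippet below fine-tunes a SegFormer model (a hierarchical transformer encoder paired with a lightweight all-MLP decode head) for an 18-label segmentation task. The checkpoint name, learning rate, and training loop are illustrative assumptions, not the project's exact configuration.

```python
# Sketch: fine-tuning a pretrained transformer-encoder / MLP-decoder
# model (SegFormer) for semantic segmentation with 18 labels.
import torch
from transformers import SegformerForSemanticSegmentation

NUM_LABELS = 18  # from the project description

# "nvidia/mit-b0" is an illustrative encoder checkpoint; passing
# num_labels initializes a fresh 18-way MLP decode head.
model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b0",
    num_labels=NUM_LABELS,
)
model.train()

optimizer = torch.optim.AdamW(model.parameters(), lr=6e-5)

def training_step(pixel_values, labels):
    """One fine-tuning step; pixel_values: (B, 3, H, W), labels: (B, H, W)."""
    outputs = model(pixel_values=pixel_values, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()
```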
Using llava-1.5-7b-hf, a 7-billion-parameter multimodal model, I fine-tuned it into a chat-instruction model on a dataset containing both images and text. An NVIDIA A100 GPU on Google Colab was used to fine-tune the model with the LoRA technique. To speed up training and reduce compute, the model was 4-bit quantized.
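A minimal sketch of this loading setup is shown below: the model is quantized to 4 bits with bitsandbytes and wrapped with LoRA adapters via PEFT. The LoRA rank, alpha, and target modules are typical LLaMA-style choices and are assumptions, not the project's exact values.

```python
# Sketch: 4-bit quantized loading of llava-1.5-7b-hf plus LoRA adapters
# for chat-instruction fine-tuning (as run on a single A100).
import torch
from transformers import LlavaForConditionalGeneration, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization to cut memory use and reduce compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA: train small low-rank adapter matrices instead of the 7B base
# weights. These target modules are an assumed, common configuration.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapters are trainable
```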
This study examines the performance of various deep learning models on a small, custom dataset of 1,800 images across six categories. We compare the efficacy of training models from scratch against fine-tuning pretrained architectures, specifically ResNet-18, VGG-19, and Inception V3. Our results suggest that fine-tuned pretrained models significantly outperform models trained from scratch, offering better accuracy with less computational expense, particularly in data-constrained environments. These findings advocate for the strategic use of transfer learning in similar small-scale data settings.
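A minimal sketch of the comparison for ResNet-18 is shown below: the same architecture is built either with ImageNet weights (transfer learning) or randomly initialized (from scratch), with the classifier head replaced for six categories. Function names and the choice of weights are illustrative assumptions.

```python
# Sketch: fine-tuned vs. from-scratch ResNet-18 on a 6-class dataset.
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 6  # six categories, per the study

def build_resnet18(pretrained: bool) -> nn.Module:
    """Return ResNet-18 with ImageNet weights or randomly initialized."""
    weights = models.ResNet18_Weights.IMAGENET1K_V1 if pretrained else None
    model = models.resnet18(weights=weights)
    # Replace the 1000-way ImageNet head with a 6-way classifier.
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
    return model

finetuned = build_resnet18(pretrained=True)   # transfer-learning condition
scratch = build_resnet18(pretrained=False)    # from-scratch baseline
```

Both models are then trained identically on the 1,800-image dataset, so any accuracy gap isolates the effect of the pretrained initialization.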
The study aims to reduce traffic congestion using AI and computer vision. Vehicle presence is detected and classified with a YOLO-based CNN computer-vision model and aggregated into a vehicle-density measure; a Deep Q-Learning model then optimizes traffic signals by analyzing queue lengths and vehicle speeds from live video feeds. The proposed methodology shows a 40% reduction in average waiting time, and a pre-emption system for priority vehicles such as ambulances can be integrated.
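The sketch below illustrates the two pieces of this pipeline under stated assumptions: a density estimate computed from YOLO-style detections, and a small Deep Q-Network mapping the intersection state to a signal phase. The vehicle class IDs, state layout, network sizes, and four-phase action space are all illustrative, not the study's exact design.

```python
# Sketch: YOLO-derived vehicle density feeding a Deep Q-Network that
# selects a traffic signal phase.
import torch
import torch.nn as nn

VEHICLE_CLASSES = {2, 3, 5, 7}  # car, motorcycle, bus, truck in COCO labels

def vehicle_density(detections, lane_area: float) -> float:
    """Count vehicle-class detections (class_id, confidence pairs) and
    normalize by the monitored lane area to get a density measure."""
    count = sum(1 for class_id, _conf in detections if class_id in VEHICLE_CLASSES)
    return count / lane_area

class TrafficDQN(nn.Module):
    """Maps per-approach queue lengths, mean speeds, and densities
    (4 approaches x 3 features = 12 inputs) to Q-values over phases."""
    def __init__(self, state_dim: int = 12, num_phases: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_phases),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

# Greedy phase selection from the current intersection state.
dqn = TrafficDQN()
state = torch.zeros(1, 12)  # queues, speeds, densities for 4 approaches
phase = dqn(state).argmax(dim=1).item()
```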