I am a professor in the ECE department at the Acopian Engineering Center at Lafayette College. I earned a B.S. in Computer Science and Engineering from the University of Toledo in Ohio, and an M.S. and a Ph.D. in Electrical and Computer Engineering from Purdue University in Indiana. I have also previously served as a Clinical Systems Engineer at The Christ Hospital for Renovo Solutions in Cincinnati, Ohio, and as a Telemetry Developer Intern at Philips North America in Cleveland, Ohio. I was a nominee for the prestigious Elite 50 award and have published two award-winning research papers, two theses and five peer-reviewed journal articles. My interests extend well beyond my education and research. Outside of academia, I enjoy spending most of my time outdoors. During the warmer months here in Pennsylvania, I enjoy mountain biking, hiking, free climbing, kayaking and traveling with friends and family. When forced indoors, I like to watch sci-fi, thriller and sitcom movies and TV shows.
A professor in the department of Electrical and Computer Engineering at the Acopian Engineering Center at Lafayette College.
An instructor (Ph.D. student serving as an adjunct professor) in the department of Electrical and Computer Engineering at the Purdue School of Engineering, teaching C and Python programming courses.
Working independently on R&D of an autonomous robot for the IoT Collaboratory at Purdue University Indianapolis and for military and federal law enforcement agencies (IoT Lab, Advisor: Dr. M. El-Sharkawy).
• A teaching assistant and a mentor in the department of Electrical and Computer Engineering at the Purdue School of Engineering for Dr. El-Sharkawy, Dr. Rizkalla, Professor Shayesteh and Professor Chong Chie.
• Working with undergraduate and graduate engineering students on their senior design projects, labs and coursework.
• TA'ed:
◦ ECE 568 - Design with Embedded Systems.
◦ ECE 533 - Wireless and Multimedia Computing.
◦ ECE 487 - Senior Design I.
◦ ECE 261 - Advanced C Programming Lab.
◦ ECE 204 - Introduction to Electrical and Electronics Circuits.
Worked with a Ph.D. student on a research project spanning robotics, embedded systems and software development to develop a semi-autonomous robot for a major U.S. defense contractor in Indiana (IoT Lab, Advisor: Dr. M. El-Sharkawy).
• Designed and developed medical technologies including, but not limited to, centralized patient monitoring, wireless medical telemetry, and EHR-integrated biomedical devices.
• Worked as part of the TCHHN CE project management team to improve patient care quality and safety, customer efficiency, and the workflow usability of medical device technology, along with customizations that support these core objectives and the adoption of TCHHN applications.
• Assisted in evaluation, recommendation, selection, procurement, integration, installation, and certification of medical devices for TCHHN expansion/renovation projects as well as regulatory inspections.
• Kick-started Renovo's Integrated Systems Management (ISM) project, trained on-site biomedical and technical staff, and helped lead the project through all stages of its lifecycle at over 20 hospital sites in the US.
• Developed a software prototype to help field-service engineers record different calibration values.
• Developed a database for medical devices' system log files and periodically reviewed the files for issues.
• Assisted local and international software development teams by identifying issues and providing solutions.
• Led technical reviews of assigned work packages and helped revise other team members’ technical reviews.
• Specialization: Computer Engineering - AI and Robotics.
• Research Thesis: Deep Neural Networks (Advisor: Prof. Dr. M. El-Sharkawy).
• Notable Achievements:
◦ Published four first-author scholarly articles in PGScience-AJECE and MDPI Special Edition peer-reviewed journals.
◦ Student Ambassador for ECE and OIA departments at IUPUI.
• Specialization: Computer Engineering - AI and Robotics.
• Research Thesis: AI on the Edge with CondenseNeXt: An Efficient Deep Neural Network for Devices with Constrained Computational Resources (Advisor: Prof. Dr. M. El-Sharkawy).
• Notable Achievements:
◦ Published two multi-award-winning research papers at IEEE conferences.
◦ Participated in and presented at international conferences: IEMTRONICS 2021 in Toronto, Ontario, and CCWC 2021 in Las Vegas, NV.
◦ Attended Intel Industrial IoT Workshop in Indianapolis.
Apart from being an embedded systems and software developer, I enjoy spending most of my time outdoors. During the warmer months here in Pennsylvania, I enjoy mountain biking, hiking, free climbing, kayaking and traveling with friends and family.
When forced indoors, I like to watch sci-fi and sitcom movies and TV shows. I enjoy exploring different cuisines, and I spend a large amount of my free time following the latest advancements in the automotive industry.
This dissertation delves into a significant challenge for Autonomous Vehicles (AVs): achieving efficient and robust perception under adverse weather and lighting conditions. Systems that rely solely on cameras face difficulties with visibility over long distances, while radar-only systems struggle to recognize features like stop signs, which are crucial for safe navigation in such scenarios.
See Publication (DOI): 10.25394/PGS.26009968.v1

This paper presents NeXtFusion, a novel deep Camera-Radar fusion network designed to enhance autonomous vehicle (AV) perception in challenging weather and lighting conditions. By combining the rich semantic information from cameras with radar's ability to sense reliably through rain, fog and darkness, NeXtFusion improves object detection and tracking accuracy. Extensive testing on the nuScenes dataset shows NeXtFusion outperforms existing methods with a top mAP score of 0.473 and strong performance in other metrics, demonstrating its effectiveness for robust, real-time AV perception and safety.
See Publication (DOI): 10.3390/fi16040114

This research paper introduces MobDet3, an efficient, lightweight object detection network specifically designed for self-driving vehicles. It uses a modified MobileNetV3 as its backbone and incorporates adapted computer vision techniques, aiming to achieve high accuracy and fast inference speeds. Extensive tests show that MobDet3 achieves up to 88.92 frames per second, making it well suited for real-time object detection in autonomous driving.
See Publication (DOI): 10.3390/jlpea13030049

This paper introduces NextDet, a modern object detection network specifically designed for efficient monocular sparse-to-dense streaming perception, with a focus on autonomous vehicles and rovers using edge devices. NextDet utilizes CondenseNeXt, a lightweight convolutional neural network, as its backbone to extract and aggregate image features at different granularities. It also incorporates other novel and modified strategies for object detection and bounding-box drawing, adapting to the latest version of the PyTorch framework.
See Publication (DOI): 10.3390/fi14120355

This paper introduces CondenseNeXtV2, a modern image classifier that is lightweight and ultra-efficient. It is designed to be deployed on local embedded systems (edge devices) for general-purpose usage. Building upon the award-winning CondenseNeXt paper presented at the 2021 IEEE CCWC, this work incorporates a new self-querying augmentation policy technique on the target dataset and adapts to the latest version of the PyTorch framework and activation functions. The result is improved efficiency in image classification computation and accuracy.
See Publication (DOI): 10.3390/jlpea12010008

This paper introduces EffCNet, a novel deep convolutional neural network architecture specifically designed for edge devices with limited computational resources. EffCNet is an improved and efficient version of the CondenseNet CNN, incorporating self-querying data augmentation and depthwise separable convolutional strategies to enhance real-time inference performance and reduce model size, trainable parameters, and Floating-Point Operations (FLOPs). Extensive supervised image classification analyses are conducted on the CIFAR-10 and CIFAR-100 benchmarking datasets to verify the CNN's real-time inference performance. Finally, the trained weights are deployed on the NXP BlueBox, an intelligent edge development platform for self-driving vehicles and UAVs, leading to valuable conclusions.
See Publication (DOI): 10.11648/j.ajece.20210502.15

This Master's thesis presents a neoteric variant of a deep convolutional neural network architecture called CondenseNeXt, specifically designed for ARM-based embedded computing platforms with limited computational resources. CondenseNeXt is an improved version of CondenseNet, incorporating depthwise separable convolutions and group-wise pruning to remove redundant elements without compromising network performance, while cardinality and class-balanced focal loss functions are introduced to alleviate the effects of pruning (a minimal illustrative sketch of these building blocks follows the publication list). Extensive analyses on benchmark datasets (CIFAR-10, CIFAR-100, and ImageNet) are conducted using an ARM-based embedded computing platform, the NXP BlueBox 2.0, for real-time image classification. CondenseNeXt achieves state-of-the-art performance with significant reductions in forward FLOPs and can efficiently perform image classification without CUDA-enabled GPU support on ARM-based computing platforms.
See Thesis Publication: Purdue University, 2021.

This paper showcases the implementation of our ultra-efficient deep convolutional neural network architecture, CondenseNeXt, on the NXP BlueBox platform, specifically designed for self-driving vehicles. We highlight CondenseNeXt's outstanding efficiency in terms of FLOPs, tailored for ARM-based embedded computing platforms with constrained computational resources, allowing image classification without requiring a CUDA-enabled GPU.
See Publication (DOI): 10.1109/IEMTRONICS52119.2021.9422541

This paper introduces CondenseNeXt, a novel variant of a deep convolutional neural network architecture aimed at enhancing the performance of existing CNN architectures for real-time inference on embedded systems with limited computational resources. Based on the PyTorch framework, CondenseNeXt demonstrates remarkable efficiency compared to the baseline architecture, CondenseNet, achieving reduced trainable parameters and FLOPs while maintaining a balance between trained model size (less than 3.0 MB) and accuracy. The result is an unprecedented level of computational efficiency, making CondenseNeXt a promising solution for real-time inference on embedded devices.
See Publication (DOI): 10.1109/CCWC51732.2021.9375950
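The CondenseNeXt work above leans on two building blocks named in the thesis summary: depthwise separable convolutions and a class-balanced focal loss. The PyTorch sketch below is purely illustrative and is not the published CondenseNeXt code; every layer size, name, and hyperparameter value (for example beta and gamma) is an assumption chosen for readability, and the exact settings used in the thesis and papers may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthwiseSeparableConv(nn.Module):
    # Factor a standard KxK convolution into a per-channel (depthwise) convolution
    # followed by a 1x1 (pointwise) convolution, cutting parameters and FLOPs
    # sharply for wide layers.
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride=stride,
                                   padding=kernel_size // 2, groups=in_ch,
                                   bias=False)                 # one filter per input channel
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)  # 1x1 conv mixes channels
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        return F.relu(self.bn(self.pointwise(self.depthwise(x))))


def class_balanced_focal_loss(logits, targets, samples_per_class, beta=0.999, gamma=2.0):
    # Class-balanced focal loss in the spirit of Lin et al. (focal loss) and
    # Cui et al. (effective number of samples): per-class weight (1 - beta) / (1 - beta^n_c).
    effective_num = 1.0 - torch.pow(beta, samples_per_class.float())
    weights = (1.0 - beta) / effective_num
    weights = weights / weights.sum() * weights.numel()         # normalize to the number of classes

    log_p = F.log_softmax(logits, dim=1)
    log_p_t = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log-probability of the true class
    focal = (1.0 - log_p_t.exp()) ** gamma * (-log_p_t)         # down-weight easy examples
    return (weights[targets] * focal).mean()


if __name__ == "__main__":
    block = DepthwiseSeparableConv(32, 64)
    x = torch.randn(8, 32, 56, 56)
    print(block(x).shape)                                       # torch.Size([8, 64, 56, 56])

    logits = torch.randn(8, 10)
    targets = torch.randint(0, 10, (8,))
    counts = torch.randint(50, 5000, (10,))                     # assumed per-class sample counts
    print(class_balanced_focal_loss(logits, targets, counts).item())

Factoring convolutions this way is a large part of why networks in this family stay small (the CondenseNeXt model above is under 3.0 MB), while a class-balanced focal loss is one common way to offset the class imbalance that pruning and skewed datasets introduce during training.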