Florida Tech

Mission

The mission of the Department of Computer Engineering and Sciences is to prepare computing, engineering, and systems students for success and leadership in the conception, design, management, implementation, and operation of solutions to complex engineering problems, and to expand knowledge and understanding of computing and engineering through research, scholarship, and service.

Electrical and Computer Engineering

CALI - Cobot Autonomous Living Interface



Team Leader(s)
Nicholas Santamaria

Team Member(s)
Heber Lopez, Berke Dogan

Faculty Advisor
Dr. Edward L. Caraway




CALI - Cobot Autonomous Living Interface  File Download
Project Summary
The CALI (Cobot Autonomous Living Interface) system is a low-cost, assistive robotic feeding solution designed to improve independence for individuals with motor impairments. Built on the AR4 6-DOF robotic arm platform, the system integrates computer vision, depth sensing, and custom control software to detect food items, track user position, and autonomously guide a utensil for feeding. A neural network processes visual data in real time, enabling accurate food localization, while a depth camera provides spatial awareness for safe and precise motion. The system is controlled through a custom user interface and implemented using ROS 2 (Robot Operating System 2) for modular, scalable operation. By combining accessible hardware with intelligent software, CALI demonstrates a practical and affordable approach to assistive robotics, with strong potential for further development in healthcare and at-home support applications.


Project Objective
The objective of the CALI project is to design and develop a low-cost, assistive robotic feeding system that enables individuals with motor impairments to eat independently and safely. This system aims to integrate computer vision, depth sensing, and robotic manipulation to detect food, track user position, and execute precise, autonomous movements using a 6-DOF robotic arm. A key goal is to create a modular and scalable platform controlled through a custom user interface and built on ROS 2, allowing for real-time operation and future expansion. Ultimately, the project seeks to demonstrate that advanced assistive technology can be both accessible and effective, providing a practical foundation for further development in healthcare and at-home support environments.

Manufacturing Design Methods
The CALI system was developed using an iterative design process that combined mechanical fabrication, electronics integration, and software testing to create a functional assistive feeding prototype. The project was built around the AR4 robotic arm platform, with custom-designed components modeled in CAD and fabricated primarily through 3D printing to support rapid prototyping, low cost, and easy modification. Multiple utensil-mount concepts and support components were designed, tested, and refined to improve fit, usability, and overall appearance, including custom housings and attachments tailored to the system’s needs. On the electrical and control side, the design integrated depth-sensing hardware, embedded control components, and a custom interface to connect sensing, processing, and robotic motion into a single system. Throughout development, subsystems were repeatedly evaluated and adjusted based on testing results, allowing the team to improve reliability, functionality, and manufacturability while maintaining a modular design approach.

Specification
The CALI system is designed as a 6-degree-of-freedom (6-DOF) assistive robotic platform built on the AR4 robotic arm, capable of precise and repeatable motion suitable for feeding tasks. The system operates using stepper motor-driven joints with encoder feedback for improved positioning accuracy, achieving sub-centimeter end-effector precision within its working envelope. A depth-sensing camera provides real-time spatial data, enabling accurate food localization and user tracking within a typical operating range of approximately 0.2 to 1.0 meters. The control architecture is implemented using ROS 2, allowing modular communication between perception, planning, and actuation nodes. The vision system utilizes a trained neural network for object detection, running on a standard computing platform capable of real-time inference. Custom 3D-printed PLA components are used for the utensil mount and protective enclosures, ensuring lightweight and cost-effective fabrication. The system is powered by standard DC power supplies and interfaces with a custom-built user interface for manual control, calibration, and automated operation modes.
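The food-localization step described above pairs the neural network's 2D detections with the depth camera's range data. As a minimal sketch of how that pairing typically works, the standard pinhole-camera model deprojects a pixel plus its measured depth into a 3D point in the camera frame. The intrinsics (`fx`, `fy`, `cx`, `cy`) and the pixel values below are illustrative assumptions, not the project's calibration:

```python
def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Map a pixel (u, v) with measured depth to a 3-D camera-frame point
    using the pinhole model. Intrinsics here are hypothetical."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# A detection at the image center, 0.5 m away (inside the 0.2-1.0 m range):
point = deproject(u=320, v=240, depth_m=0.5,
                  fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```

The resulting camera-frame point would then be transformed into the arm's base frame before motion planning, a step that depends on the hand-eye calibration of the specific rig.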

Analysis
The CALI system demonstrates that a low-cost, modular assistive robotic platform can effectively integrate computer vision, depth sensing, and robotic control to perform feeding tasks with reasonable accuracy and reliability. By leveraging ROS 2, the system achieves real-time communication between perception and actuation, validating the feasibility of combining modern software frameworks with accessible hardware like the AR4 arm. While testing showed strong performance in detecting food and guiding motion, limitations such as sensitivity to lighting conditions, system latency, and the inherent constraints of a non-human-centered robotic platform highlight areas for improvement. Overall, the project confirms the potential for affordable assistive robotics while identifying key opportunities for enhancing robustness, safety, and user adaptability.

Future Works
Future work for the CALI system will focus on improving reliability, safety, and user adaptability to move closer to real-world deployment. Enhancements to the computer vision pipeline, including more robust neural network training and expanded datasets, will improve accuracy across varying lighting conditions and a wider range of food types. Reducing system latency through optimized processing and more efficient communication within ROS 2 will enable smoother and more responsive motion. Mechanical upgrades to the AR4 platform, such as improved safety features, softer end-effectors, and more ergonomic utensil designs, will make the system better suited for direct human interaction. Additional developments may include user-specific calibration profiles, voice or gesture-based controls, and expanded autonomy for tasks beyond feeding. Ultimately, future iterations aim to refine the system into a more robust, intuitive, and clinically viable assistive technology.


Acknowledgement
The team would like to thank the Machine Learning Team (Aruna Dookeran, Michael Yanke, Kari Voelstad Bogen, Levent Kahveci), as well as Dr. Caraway and TA Elis for their support throughout the senior design process.




Electric Vehicle



Team Leader(s)
Terence Lee

Team Member(s)
Troy Stephens, Corbin Williams

Faculty Advisor
Dr. Edward L. Caraway




Electric Vehicle  File Download
Project Summary
The mission of this multi-generational project is to research, design, and implement a traction inverter system for an electric vehicle motor, specifically utilizing the M-9000-MACHE "Eluminator" front drive unit. The project focuses on developing a comprehensive hardware and software framework capable of driving a modern 3-phase motor using high-efficiency Silicon Carbide (SiC) MOSFET semiconductor technology. The scope of this work encompasses the design of the traction inverter, the implementation of an integrated thermal management system, and the development of a dedicated power supply system. Additionally, the project will integrate real-time condition monitoring to track motor health and operational status.


Project Objective
The primary objective of this project phase is to advance the development of the traction inverter system by transitioning from custom-prototype hardware to industrial-grade evaluation platforms. Building upon the control architecture established by previous teams, this project aims to achieve a controlled motor spin of the electric drive unit. Specific objectives include:
• Hardware Modernization: Replacing legacy custom-designed gate-driver boards with Texas Instruments UCC21520EVM evaluation modules and utilizing the LAUNCHXL-F280025C (C2000™ microcontroller) to improve signal reliability and system protection.
• System Validation: Verifying the operational integrity of the existing SiC MOSFET power stage and ensuring compatibility with the new gate-drive signals and the EVAL-AD2S1210SDZ resolver interface.
• High-Voltage Isolation: Implementing bus terminal connections for the MOSFETs using polyimide film for robust electrical insulation and thermal stability between high-voltage conductors.
• Mechanical Integration: Designing and fabricating a custom, CAD-validated mounting board to securely house the 3-phase power electronics and control modules while ensuring proper alignment for torque transfer.
• Circuit Protection: Implementing an RCD snubber circuit to suppress high-frequency voltage transients and protect power semiconductors from inductive kickback.
• Controlled Bench Spin: Executing a phased bring-up and testing plan to confirm proper gate-drive behavior, leading to the successful rotation of the Ford Mustang Mach-E electric motor under deterministic PWM control.

Manufacturing Design Methods
• 3-Phase System CAD & Mounting: Developed custom 3D models for a dedicated mounting board and the 3-phase system layout. The CAD work focused on optimizing the placement of the inverter bridge and control boards to minimize electromagnetic interference (EMI) and ensure secure mechanical fastening.
• High-Voltage Terminal Construction: Utilized bus terminal connections for the MOSFET power stage. Polyimide insulation was applied to isolate the voltage bus bars from the mounting infrastructure, providing high dielectric strength and heat resistance.
• Protection Circuitry Fabrication: Engineered and integrated an RCD (Resistor-Capacitor-Diode) snubber circuit specifically designed to damp voltage spikes caused by parasitic inductance in the 3-phase bridge during high-speed switching.
• Control & Sensing Integration: Interfaced the LAUNCHXL-F280025C real-time controller with the EVAL-AD2S1210SDZ evaluation board to enable high-precision resolver-to-digital conversion for accurate rotor position tracking.
• Gate Drive Implementation: Integrated the UCC21520EVM dual-channel gate drivers to provide the necessary isolation and drive current to reliably toggle the SiC MOSFETs while maintaining safety between the control and power tiers.

Specification
• Motor: Ford Mustang Mach-E Front Drive Unit (M-9000-MACHE)
• Control MCU: TI LAUNCHXL-F280025C (C2000™ Real-time Controller)
• Gate Driver: TI UCC21520EVM (Isolated Dual-Channel Evaluation Module)
• Position Sensing: EVAL-AD2S1210SDZ (High-resolution Resolver-to-Digital Converter)
• Power Stage Protection: Integrated RCD Snubber Circuit
• Switching Technology: Silicon Carbide (SiC) MOSFETs
• Isolation Material: Polyimide (Kapton) film
• Mechanical Infrastructure: Custom CAD-validated 3-phase mounting board

Analysis
• Transient Mitigation: Identified the need for an RCD snubber to clamp voltage spikes caused by parasitic inductance in the 3-phase bridge.
• Signal Compatibility: Verified that the LAUNCHXL-F280025C PWM outputs match the input requirements and dead-time logic of the UCC21520EVM drivers.
• Material Insulation: Selected polyimide for its high dielectric strength to prevent arcing between high-voltage terminals and the chassis.
• Resolver Mapping: Configured the EVAL-AD2S1210SDZ resolution to match the Ford Mach-E resolver characteristics for accurate commutation.
• Mechanical Clearance: Used CAD models to ensure proper "creepage and clearance" distances between high-voltage conductors for safety.
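The transient-mitigation point above rests on a standard energy-balance argument: the energy stored in the parasitic loop inductance at turn-off, 0.5·L·I², is absorbed by the snubber capacitor, which is allowed to ring up from the bus voltage to a chosen peak. A first-cut sizing sketch follows; all numeric values are illustrative placeholders, not measured parameters of this inverter:

```python
def rcd_snubber(L_par_H, I_load_A, V_bus_V, V_pk_V, f_sw_Hz, duty_min=0.1):
    """First-cut RCD snubber sizing from the energy balance
        0.5 * C * (V_pk^2 - V_bus^2) = 0.5 * L * I^2
    The resistor must bleed the capacitor back to V_bus (~3 time
    constants) within the shortest switch on-time."""
    C = L_par_H * I_load_A**2 / (V_pk_V**2 - V_bus_V**2)
    t_on_min = duty_min / f_sw_Hz
    R = t_on_min / (3.0 * C)
    # Snubber resistor dissipation: the clamped energy is dumped every cycle.
    P_R = 0.5 * C * (V_pk_V**2 - V_bus_V**2) * f_sw_Hz
    return C, R, P_R

# Illustrative numbers only: 100 nH loop, 50 A, 400 V bus,
# 500 V allowed peak, 20 kHz switching.
C, R, P = rcd_snubber(100e-9, 50.0, 400.0, 500.0, 20e3)
```

With these placeholder values the formula yields a capacitor of a few nanofarads, a resistor of a few hundred ohms, and a resistor dissipation of a couple of watts; real component selection would start from measured loop inductance and overshoot targets.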

Future Works
• Thermal Design Research: Continue investigating advanced cooling strategies to manage heat dissipation during high-load operation.
• Proprietary System Integration: Research methods to interface with Ford's proprietary communication and safety protocols.
• Power Optimization: Develop strategies to refine power delivery efficiency while reducing the overall physical footprint of the system.
• Mechanical Advisement: Seek expert consultation on mechanical impacts and structural stresses on the system.


Acknowledgement
Special thanks to our faculty advisor, Dr. Edward L. Caraway, and Graduate Student Assistant Elis Karcini for their guidance and support. We also acknowledge the dedicated efforts of the project team members whose contributions were essential to the advancement of this system.




Microgravity Simulator



Team Leader(s)
Alexander Montano

Team Member(s)
Aruna Dookeran, Elias Orellana, Aiden Smart

Faculty Advisor
Dr. Andrew G. Palmer

Secondary Faculty Advisor
Dr. Edward L. Caraway



Microgravity Simulator  File Download
Project Summary
This project involves the design and development of an automated two-axis microgravity simulator (clinostat) for biological research. The system uses continuous dual-axis rotation to simulate microgravity conditions on Earth. It integrates mechanical, electrical, and software components, including stepper motors, slip rings, LED lighting, and a Raspberry Pi-controlled touchscreen interface.


Project Objective
The objective of this project is to design and build an automated microgravity simulator capable of continuous dual-axis rotation. The system aims to provide consistent lighting, real-time user control, and reduced manual intervention through an integrated touchscreen interface.

Manufacturing Design Methods
The system was designed using Fusion 360 and fabricated primarily through 3D printing. A dual-axis frame was developed to support continuous rotation using two NEMA 17 stepper motors. Slip rings were incorporated to allow power transfer during rotation, while electrical components such as the motor driver HAT, MOSFET, and power supplies were integrated into a compact enclosure. A Raspberry Pi was used for system control and user interaction.
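The principle behind the dual-axis rotation described above is that continuously reorienting the sample nulls the time-averaged gravity vector in the sample's own frame. A small numeric sketch can verify this; the choice of rotation axes (outer about x, inner about y) and the incommensurate speeds are illustrative assumptions, not the clinostat's actual geometry or RPM settings:

```python
import math

def sample_frame_gravity(a, b):
    """Unit gravity vector (g = 1) expressed in the rotating sample frame,
    after an outer rotation a about x and an inner rotation b about y."""
    return (math.cos(a) * math.sin(b),
            -math.sin(a),
            -math.cos(a) * math.cos(b))

def averaged_gravity(w_outer, w_inner, duration_s, dt=0.01):
    """Time-average the sample-frame gravity vector over a run."""
    n = int(duration_s / dt)
    sx = sy = sz = 0.0
    for i in range(n):
        t = i * dt
        gx, gy, gz = sample_frame_gravity(w_outer * t, w_inner * t)
        sx += gx; sy += gy; sz += gz
    return (sx / n, sy / n, sz / n)

# Incommensurate speeds so the trajectory covers orientations evenly:
gavg = averaged_gravity(w_outer=1.0, w_inner=0.618, duration_s=600.0)
residual = math.sqrt(sum(c * c for c in gavg))  # fraction of 1 g remaining
```

Over a ten-minute simulated run the residual averaged gravity drops to a small fraction of 1 g, which is the sense in which a clinostat "simulates" microgravity for slow biological processes.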

Specification
• Dual-axis rotation system
• Two NEMA 17 stepper motors
• 24V LED lighting with PWM control
• Raspberry Pi-based control system
• Touchscreen interface for user interaction
• Slip rings for continuous rotation
• Separate 12V (motors) and 24V (LEDs) power systems

Analysis
The system successfully integrates mechanical rotation, electrical control, and user interface design into a single platform. The use of slip rings allows continuous operation without wire interference, while the touchscreen interface improves usability by enabling real-time adjustments. Initial testing demonstrates stable LED control and system responsiveness, with ongoing work focused on motor performance and full system integration.

Future Works
Future work includes completing system integration, performing extended testing, and optimizing performance for long-duration operation. Additional improvements may include enhanced automation features, improved thermal management, and further refinement of the user interface.

Other Information
This project provides an accessible and cost-effective platform for simulating microgravity conditions on Earth. The system is designed to support continued development and future enhancements for expanded research applications.

Acknowledgement
The team would like to thank Dr. Palmer for his guidance and project direction, as well as Dr. Caraway and TA Elis for their support throughout the senior design process. Additional thanks to Florida Tech and the OEC lab for providing resources for fabrication and testing.




Computer Science and Software Engineering

Cardiac Image Reconstruction From Low-Dosage SPECT Scans via Deep Learning



Team Leader(s)
Timothy Shane

Team Member(s)
Evan Gunderson, Alex Thomas

Faculty Advisor
Dr. Marius Silaghi

Secondary Faculty Advisor
Dr. Philip Chan



Cardiac Image Reconstruction From Low-Dosage SPECT Scans via Deep Learning  File Download
Project Summary
Single Photon Emission Computed Tomography (SPECT) is a medical imaging technique in which a patient is injected with a radiotracer that emits gamma rays; gamma cameras rotating around the patient detect these rays, producing a series of images taken from different angles, collectively referred to as a sinogram. The sinogram is then converted into a 3D volume of the patient using techniques such as Maximum Likelihood Expectation Maximization (MLEM). This reconstruction is slow; a neural network can both speed it up and allow a lower dosage of radiotracer to be used. An autoencoder with attention was trained on 72,000 simulated patients and can be used to reconstruct cardiac images nearly instantaneously.
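For reference, the MLEM baseline the project compares against is a simple multiplicative fixed-point iteration. The sketch below shows it on a toy two-detector, two-voxel system (the matrix `A` and counts `y` are made-up illustration values, nothing like a real 120-view sinogram):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Maximum Likelihood Expectation Maximization for y ≈ A @ x.

    Multiplicative update:  x <- x / (A^T 1) * A^T (y / (A x))
    """
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)       # forward projection
        x = x / np.maximum(sens, 1e-12) * (A.T @ (y / proj))
    return x

# Toy system: two detectors viewing two voxels.
A = np.array([[1.0, 0.5],
              [0.2, 1.0]])
x_true = np.array([3.0, 1.0])
y = A @ x_true                                # noiseless "sinogram"
x_hat = mlem(A, y)
```

On real data this iteration runs over an enormous system matrix for many iterations, which is why a single forward pass through a trained network is so much faster.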


Project Objective
Quickly and accurately reconstruct SPECT scans with low dosages of radiotracer via a neural network.

Manufacturing Design Methods
For generating the training data, a simulation of the GE Infinia SPECT scanner was created in GATE 10, a medical physics simulator. eXtended CArdiac-Torso (XCAT) patient phantoms with low radiotracer dosage were scanned in the simulation; simulated gamma cameras rotating in 3-degree increments captured 120 images per patient, forming 120-view sinograms. Augmentations were performed on the sinograms to increase the amount of training data, including translations, z-axis rotations, subresolution, and Poisson noise. An autoencoder with attention was then trained with the sinograms as inputs and volumes reconstructed via MLEM as targets. The Structural Similarity Index Measure (SSIM) was used to evaluate the accuracy of the reconstructed scans.
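Two of the listed augmentations are easy to sketch with NumPy: a z-axis patient rotation corresponds to a circular shift along the angle axis of the sinogram (3 degrees per view here), and Poisson noise resamples each bin as photon counts. The sinogram shape below is a hypothetical placeholder, not the project's actual dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

def rotate_z(sinogram, degrees):
    """A z-axis rotation of the patient shifts all projection angles,
    i.e. a circular shift along the angle axis (3 degrees per view)."""
    shift = int(round(degrees / 3.0))
    return np.roll(sinogram, shift, axis=0)

def add_poisson_noise(sinogram):
    """Resample each bin as Poisson(counts), the physical noise model
    for photon-counting detectors."""
    return rng.poisson(sinogram).astype(float)

# Hypothetical sinogram: 120 views of 64x64 detector bins.
sino = rng.uniform(10, 100, size=(120, 64, 64))
aug = add_poisson_noise(rotate_z(sino, degrees=30.0))
```

Because a full-revolution rotation maps each view back onto itself, `rotate_z(sino, 360.0)` returns the original sinogram, which makes the shift convention easy to sanity-check.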

Specification
Model: The model trained for this project was an autoencoder with self-attention blocks. Simulation: Each patient, cropped to the target organs using a fixed window that always contained them, was dosed with 7 MBq of Tc-99m; 120 images were then taken, each exposed for 5 seconds, at 3-degree increments. The acquisition is distributed across 4 cameras, giving a total simulated scan time of 150 seconds and full 360-degree patient coverage. The gamma camera is an exact replica of the GE Infinia Hawkeye 4 SPECT camera with the LEHR collimator and a thallium-activated sodium iodide (NaI(Tl)) crystal of ⅜” thickness.

Analysis
We achieved an RMSE of 0.002743 and an SSIM of 0.943770, demonstrating that neural networks are well suited to sinogram reconstruction. Our inference time is near instant, vastly surpassing the standard 45 minutes required by MLEM reconstructions. Visual comparison shows a clearer reconstruction than traditional iterative methods, especially for the heart. Overall, if further training and tuning is pursued with real patient data, it is feasible that neural network reconstruction could replace MLEM.

Future Works
Training our model on large amounts of real data could allow it to be used by clinicians to reconstruct SPECT scans in much shorter timeframes. It would also allow doctors to use lower dosages of radiotracer while still getting a quality scan.

Other Information
https://athomas2022.github.io/SeniorDesignSite/

Acknowledgement
Thank you to our advisors, Dr. Marius Silaghi and Dr. Debasis Mitra. Thank you to our mentors, Sammy Morries Boddepalli, Tommy Galetta, and Youngho Seo, and to the National Institutes of Health for financial support.




Competitive Programming Primer




Team Member(s)
Pedro Marcet, Ivan Marriott, Jon Ayuco

Faculty Advisor
Dr. Raghuveer Mohan

Secondary Faculty Advisor
Dr. Philip Chan



Competitive Programming Primer  File Download
Project Summary
Competitive programming has little popularity despite how helpful and fun it can be for CS students and other programmers. We tackle this issue with three products: an algorithm visualizer, a problem repository, and a problem cataloguer. Our algorithm visualizer is meant to be used with already existing code: with minimal modification to a Python script, one can visualize the arrays, pointers, and graphs in one's code. The problem repository is a database made to contain catalogued versions of all past International Collegiate Programming Contest (ICPC) problems. It allows searching by year, competition, region, level, and even problem type (Dynamic Programming, Ad Hoc, etc.). The problem cataloguer is a simple way to add and catalogue new problems into the database. It comes with a built-in scraper for Kattis (the website that hosts ICPC problems) to auto-fill most information about any given problem, leaving just the categorizing job. All these features are made available on a single website.




Specification
The visualizer was made in Python utilizing the Manim and NumPy libraries. It uses a custom array and graph-node implementation that automatically creates the visual representations. For non-tree graphs, a gradient descent algorithm is used to keep the drawing at a viable size. The database is hosted on Supabase. The cataloguer is made in Java utilizing these tools and libraries: Maven, JavaFX, Jackson, Jsoup, and the PostgreSQL JDBC Driver. The website is hosted on Netlify. GitHub was used as the main version control.
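The gradient-descent layout mentioned above can be illustrated with a minimal spring-and-repulsion energy model: edges pull their endpoints toward an ideal length while all node pairs repel, and plain gradient descent settles the positions. This is a generic stand-in under assumed force constants, not the visualizer's actual implementation:

```python
import numpy as np

def layout(edges, n, ideal=1.0, lr=0.05, steps=500, seed=0):
    """Gradient-descent graph layout: spring attraction along edges plus
    all-pairs repulsion, a simple stand-in for the visualizer's approach."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, size=(n, 2))
    for _ in range(steps):
        grad = np.zeros_like(pos)
        # Repulsion between every pair keeps the drawing from collapsing.
        diff = pos[:, None, :] - pos[None, :, :]          # (n, n, 2)
        d2 = (diff ** 2).sum(-1) + np.eye(n)              # avoid /0 on diagonal
        grad -= (diff / d2[..., None]).sum(axis=1) * 0.1
        # Spring force pulls each edge toward the ideal length.
        for i, j in edges:
            d = pos[i] - pos[j]
            dist = np.linalg.norm(d) + 1e-9
            g = (dist - ideal) * d / dist
            grad[i] += g
            grad[j] -= g
        pos -= lr * grad
    return pos

# A square with one diagonal: a small non-tree graph.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
pos = layout(edges, n=4)
```

Because repulsion and attraction balance at a finite scale, the layout neither collapses to a point nor drifts apart, which is what "keeping a viable size" requires.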


Future Works
How the cataloguer is used to maintain the database, and by whom, is left to the discretion of Dr. Mohan, our advisor and client. The scraper does not support non-standard problems (standard problems read one input from standard input and write one output to standard output). The website could also be extended to support LaTeX.


Acknowledgement
We would like to express our gratitude to Dr. Raghuveer Mohan, our academic advisor and client, who inspired us to make this project. Further thanks to Dr. Philip Chan, our secondary advisor, and the developers of all the tools, software, and libraries mentioned in the Specification.




FITARNA



Team Leader(s)
Jacob Hall-Burns

Team Member(s)
Vincenzo Barager, Dathan Dixon, Jacob Hall-Burns, Ethan Wadley

Faculty Advisor
Eraldo Ribeiro

Secondary Faculty Advisor
Philip Chan



FITARNA  File Download
Project Summary
FIT AR Navigation App (FITARNA) is an indoor augmented reality navigation system designed to help students and visitors navigate complex campus buildings such as the Evans Library. The project uses AR-based spatial mapping, indoor localization, and real-time route visualization to provide an intuitive wayfinding experience where traditional GPS systems fail. By combining Unity, Vuforia Area Targets, AR Foundation, and custom pathfinding, the system overlays directional guidance and navigation markers directly onto the real-world environment. Large academic buildings can be difficult for new students and visitors to navigate: existing outdoor navigation systems such as GPS do not work reliably indoors due to signal attenuation and limited floor-level accuracy, so users often struggle to find specific rooms, study areas, help desks, or library wings efficiently.


Project Objective
The objective of FITARNA is to create an indoor AR wayfinding application that provides accurate, real-time navigation without GPS. The system aims to achieve high-precision indoor localization, display intuitive 3D guidance overlays, and allow users to search for points of interest within a campus building.

Manufacturing Design Methods
The project was developed by first scanning the Evans Library using Vuforia Area Targets to build a high-fidelity digital representation of the environment. Unity's AR Foundation was integrated with the Vuforia Engine to support augmented reality features and robust area recognition. A custom NavMesh was implemented in Unity for shortest-path route calculation, and a spatial UI was designed to project AR markers and breadcrumb-style directional indicators into the user's physical surroundings.

Specification
The system is designed to support indoor navigation in complex academic environments with sub-meter localization accuracy. It includes searchable campus points of interest such as library wings, study rooms, and help desks. The software stack includes Unity, AR Foundation, Vuforia Engine, ARCore/ARKit support, and a custom Unity NavMesh-based routing system.

Analysis
The project demonstrates that augmented reality can improve indoor wayfinding by bridging digital maps and physical spaces more naturally than conventional navigation tools. Using spatial mapping and area targets allows the system to maintain persistent tracking and deliver more precise indoor guidance than GPS. The custom route calculation and 3D overlays create a more intuitive experience for users navigating complex buildings.

Future Works
Future work could include expanding the system to additional campus buildings, improving route adaptability in changing indoor conditions, enhancing the searchable point-of-interest database, and refining localization accuracy and user interface responsiveness. Additional features such as accessibility-aware routing or multi-floor route optimization could also strengthen the system.

Other Information
FITARNA focuses on solving indoor navigation challenges in environments where GPS is unreliable. Its main innovation is the use of AR-based spatial understanding and real-time visual guidance to create a seamless indoor wayfinding experience for campus users.

Acknowledgement
This project was completed by Vincenzo Barager, Dathan Dixon, Jacob Hall-Burns, and Ethan Wadley, with faculty advising from Eraldo Ribeiro in the Department of Computer Science at Florida Tech.
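In the app itself, shortest-path routing is handled by Unity's NavMesh; as a language-agnostic stand-in for that step, Dijkstra's algorithm over a waypoint graph computes the same kind of route. The waypoint names and distances below are hypothetical examples, not the actual Evans Library map:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over a waypoint graph: graph[u] = [(v, cost_m), ...]."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [], goal
    while node != start:                  # walk back through predecessors
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]

# Hypothetical library waypoints with edge lengths in meters:
graph = {
    "entrance":  [("lobby", 5.0)],
    "lobby":     [("help_desk", 8.0), ("stairs", 12.0)],
    "stairs":    [("study_room", 6.0)],
    "help_desk": [("study_room", 15.0)],
}
route, meters = shortest_path(graph, "entrance", "study_room")
```

The resulting node sequence is what the AR layer would then render as breadcrumb markers along the physical corridor.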












Java OO Visualizer



Team Leader(s)
Ashley McKim

Team Member(s)
Ashley McKim, Darian Dean, Simon Gardling

Faculty Advisor
David Luginbuhl

Secondary Faculty Advisor
Philip Chan



Java OO Visualizer  File Download
Project Summary
The Java OO Visualizer is a browser-based educational tool built for students in introductory Java courses. It automatically parses Java source code, generates class-relationship diagrams, and animates execution step-by-step using memory diagrams, showing object creation, reference assignment, and method invocation in real time. The tool also includes a manual diagram creator and an automated diagram comparer that gives students instant feedback on their work. No installation is required; everything runs in the browser via a Rust backend compiled to WebAssembly.


Project Objective
The goal of the Java OO Visualizer is to bridge the gap between abstract OO concepts and concrete understanding by providing an interactive, animated memory diagram tool driven by real Java source code. Specifically, the project aimed to build a system that automatically generates class diagrams from any Java input, simulates execution of a main method step-by-step with visible heap and stack state, provides a drag-and-drop diagram creator for students to draw their own OO diagrams, compares student-drawn diagrams against actual code and reports mistakes with hints, and runs entirely in the browser with no setup required on the student's part.

Manufacturing Design Methods
The backend is written in Rust and uses the tree-sitter library with a Java grammar to parse source code into a concrete syntax tree. A two-pass static analyzer extracts classes, interfaces, fields, methods, constructors, and inter-class relationships. A separate execution flow engine symbolically simulates the main method, stepping into user-defined method and constructor bodies, evaluating expressions, tracking field mutations, and resolving branch conditions and loop counts. Both pipelines emit DOT graph descriptions, which are serialized to JSON and passed to the frontend. The backend is compiled to WebAssembly using Emscripten, exposing three C-ABI functions callable from JavaScript. The frontend is built with vanilla JavaScript ES modules. It loads the WASM module at startup, calls the exported functions on every code change, and renders the resulting SVG diagrams. The Diagram Creator is a custom canvas-based drawing engine with quadratic Bézier connectors, continuous border snapping, zoom/pan, and lasso selection. The Diagram Comparer runs a lightweight client-side Java parser and diff engine entirely in the browser. Builds are automated through a GitHub Actions CI workflow using a NixOS flake for reproducible Emscripten and Rust toolchain management.
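The relationship-extraction pass described above runs on a tree-sitter syntax tree; as a deliberately tiny illustration of what it extracts, the sketch below pulls `extends` and `implements` edges out of Java source with a regular expression. This is a toy stand-in (the real backend does not use regexes), and the sample classes are invented:

```python
import re

# Matches "class Name [extends Parent] [implements A, B] {".
DECL = re.compile(
    r"class\s+(\w+)"
    r"(?:\s+extends\s+(\w+))?"
    r"(?:\s+implements\s+([\w,\s]+?))?\s*\{"
)

def extract_relationships(java_src):
    """Return (class, relation, target) edges for a class diagram."""
    edges = []
    for name, parent, ifaces in DECL.findall(java_src):
        if parent:
            edges.append((name, "extends", parent))
        if ifaces:
            for i in ifaces.split(","):
                edges.append((name, "implements", i.strip()))
    return edges

src = """
class Animal { }
class Dog extends Animal implements Comparable { }
"""
edges = extract_relationships(src)
```

Each edge would become an arrow in the emitted DOT graph; the tree-sitter version additionally handles generics, nested classes, and interfaces, which a regex cannot.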





Acknowledgement
The team would like to thank Dr. David Luginbuhl of the Florida Tech Department of Computer Sciences and Cybersecurity for his guidance and support throughout the project as faculty advisor. The project makes use of several open-source libraries and tools whose authors deserve recognition: the tree-sitter project and its Java grammar maintainers, the Graphviz team, the CodeMirror editor project, the Bootstrap framework, the pako compression library, the Emscripten toolchain, and the Rust programming language and its ecosystem.




LTE and Wifi Operated Car




Team Member(s)
Nicholas Shenk, Christian Prieto, Joseph Digafe, Donoven Nicolas

Faculty Advisor
Marius Silaghi

Secondary Faculty Advisor
Philip Chan



LTE and Wifi Operated Car  File Download
Project Summary
The Remotely Controlled Car via LTE/Wi-Fi project is a distributed, low-latency teleoperation system designed to enable safe exploration of hazardous or inaccessible environments. The system allows an operator to remotely control a rover while receiving a real-time first-person video feed and live telemetry, reducing the need for humans to physically enter dangerous areas. The architecture consists of a rover-side system (Raspberry Pi, camera, LTE/Wi-Fi module, and motor controller) and a client-side operator application. The rover captures and encodes video, transmits it over a custom UDP-based protocol, and receives encrypted control commands. The operator interface displays live video, system metrics (latency, jitter, packet loss), and provides control via keyboard, gamepad, or steering wheel. A key focus of the project is achieving low-latency communication.
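A custom UDP control protocol like the one described typically carries a sequence number, a timestamp, and the command payload, plus an integrity check so tampered packets are rejected. The sketch below is one possible wire layout under stated assumptions: the field format, the shared key, and the use of a truncated HMAC (which authenticates but does not encrypt; a real deployment would pair it with encryption) are all illustrative, not the team's actual protocol:

```python
import hashlib
import hmac
import struct
import time

KEY = b"demo-shared-secret"   # placeholder key, for illustration only
FMT = "!IdII"                 # seq, timestamp, throttle, steer (hypothetical)

def pack_command(seq, throttle, steer, key=KEY):
    """Serialize a control command and append a truncated HMAC-SHA256 tag."""
    body = struct.pack(FMT, seq, time.time(), throttle, steer)
    tag = hmac.new(key, body, hashlib.sha256).digest()[:8]
    return body + tag

def unpack_command(datagram, key=KEY):
    """Verify the tag, then decode; return None for tampered packets."""
    body, tag = datagram[:-8], datagram[-8:]
    expected = hmac.new(key, body, hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(tag, expected):
        return None
    return struct.unpack(FMT, body)

pkt = pack_command(seq=42, throttle=180, steer=90)
decoded = unpack_command(pkt)
```

The sequence number lets the rover drop stale or replayed commands, which matters over UDP where datagrams can arrive reordered.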


Project Objective
The objective of this project is to design and build a secure, low-latency remotely controlled car that can operate over both Wi-Fi and LTE. The system should provide real-time video, reliable control, and live telemetry, while maintaining stable performance even under poor network conditions. Ultimately, the goal is to allow operators to safely explore and gather information from hazardous environments without needing to be physically present.



Analysis
The system was evaluated based on latency, reliability, and usability in real-time conditions. Testing showed that using a UDP-based approach allows for low-latency video streaming and control, making the system responsive enough for remote operation. Network performance varies depending on conditions, with Wi-Fi providing lower latency and LTE offering extended range. The failover mechanism ensures continuous operation by switching networks when needed, improving reliability. Telemetry data helps monitor system performance and adjust behavior dynamically, such as adapting video quality to maintain stability. Overall, the system meets its goal of providing a responsive and reliable remote control platform, though performance can still be affected by network instability, making optimization of streaming and compression an area for future improvement.
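The latency, jitter, and packet-loss metrics the telemetry overlay reports can all be derived from per-packet sequence numbers and timestamps. The sketch below uses the smoothed interarrival-jitter estimator from RFC 3550 (gain 1/16); the record values are made-up samples, and a real system would use synchronized or relative clocks rather than raw one-way timestamps:

```python
def link_metrics(records):
    """Compute average latency, RFC 3550-style smoothed jitter, and loss
    from (seq, sent_time, recv_time) tuples."""
    latencies = [r - s for _, s, r in records]
    jitter = 0.0
    for prev, cur in zip(records, records[1:]):
        # Difference of consecutive transit times, smoothed with gain 1/16.
        d = abs((cur[2] - cur[1]) - (prev[2] - prev[1]))
        jitter += (d - jitter) / 16.0
    seqs = [q for q, _, _ in records]
    expected = max(seqs) - min(seqs) + 1
    loss = 1.0 - len(set(seqs)) / expected
    return sum(latencies) / len(latencies), jitter, loss

# Made-up samples: packet 2 never arrived; delay hovers around 40 ms.
records = [(0, 0.000, 0.040), (1, 0.020, 0.062), (3, 0.060, 0.101)]
avg_latency, jitter, loss = link_metrics(records)
```

Feeding these numbers back into the video encoder (e.g. lowering bitrate when jitter or loss climbs) is the adaptive behavior the analysis describes.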

Future Works
Future improvements will focus on optimizing the video streaming pipeline to further reduce latency and improve quality under unstable network conditions. The system can also be extended to support additional platforms such as drones, robotic arms, or other remote-operated devices. Other areas include improving the UI with better controls and visualization, enhancing failover and network resilience, and strengthening the security implementation. Additional testing in real-world environments can be done to validate performance and reliability at scale.

Other Information
https://christianprieto243.github.io/RemotelyControlledCar/





NoCap: Article fact checking using AI



Team Leader(s)
Joshua Pechan

Team Member(s)
Joshua Pechan, Anthony Ciero, Thomas Chamberlain, Varun Doddapaneni

Faculty Advisor
Marius Silaghi

Secondary Faculty Advisor
Philip Chan



NoCap: Article fact checking using AI  File Download
Project Summary
This project focuses on streamlining the process of fact-checking online articles using AI. Users can input a URL into the platform, which analyzes the article and generates a comprehensive report including a trustworthiness score, a concise summary, and a detailed analysis. Each report is stored in a database for future access and efficiency. Additionally, a Chrome extension enables users to generate these reports directly while browsing, providing seamless, real-time fact-checking without leaving the webpage.



Manufacturing Design Methods
The system was designed as a full-stack application built on AWS Amplify, AWS Bedrock, AWS DynamoDB, and React.js. The articles are scraped and preprocessed to extract relevant textual content. The cleaned text is then analyzed using an AI model, and a scoring algorithm generates an overall trustworthiness score. The database stores previously analyzed articles and their corresponding reports, which allows for quick retrieval and reduces redundant computations.
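The cache-first flow described above — check the database before running the slow AI pass — can be sketched as follows. The class and key scheme are illustrative, not the project's actual code: an in-memory dict stands in for the DynamoDB table, and the injected `analyze` callable stands in for the Bedrock model call plus the scoring algorithm.

```python
import hashlib

class ReportCache:
    """Cache-first report lookup: re-analyze an article only on a miss."""

    def __init__(self, analyze):
        self._analyze = analyze  # slow path: AI model + scoring algorithm
        self._store = {}         # stand-in for the DynamoDB report table

    def _key(self, url: str) -> str:
        # Hash the URL so the key is fixed-length and safe to store.
        return hashlib.sha256(url.encode()).hexdigest()

    def get_report(self, url: str, text: str) -> dict:
        key = self._key(url)
        if key in self._store:        # previously analyzed: near-instant
            return self._store[key]
        report = self._analyze(text)  # first visit: run the slow AI pass
        self._store[key] = report
        return report
```

This is why the Specification below distinguishes first-time analysis (seconds) from repeat lookups (near-instant): the second request never touches the model.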

Specification
Input: Article URL Output: Trustworthiness score, summary, and detailed analysis report Platform: Web application and Chrome extension Backend: AI/ML model for text analysis and classification Database: Stores article reports for reuse and scalability Performance: A few seconds for a first-time analysis; previously analyzed articles load almost instantly Compatibility: Works across major web browsers

Analysis
The system was evaluated based on accuracy, response time, and usability. Initial testing showed that the AI model is effective at identifying potentially misleading or factually incorrect content. There have been user tests and evaluations, as well as an accuracy test using known truths to measure accuracy.

Future Works
Future improvements could focus on enhancing model accuracy by utilizing larger and more diverse datasets, including real-time fact-checking sources. Integrating external APIs from established fact-checking organizations could improve reliability. There could also be more accuracy tests using more known truths.

Other Information
Website: https://main.d1kku51l1ickza.amplifyapp.com/

Acknowledgement
We would like to thank our faculty advisors, Marius Silaghi and Philip Chan, for their guidance and support throughout this project. Their expertise and feedback were instrumental in shaping the direction and implementation of this system.




Panther Shuttle App



Team Leader(s)
Joseph Hilte

Team Member(s)
Joseph Hilte, Tony Arrington, Jonathan Suo, Chase Monigle

Faculty Advisor
Khaled Slhoub

Secondary Faculty Advisor
Philip Chan



Panther Shuttle App  File Download
Project Summary
We developed a mobile Android application to improve the on-campus shuttle experience for students, drivers, and managers. The application supports three user roles. Students can view the live shuttle location, check the daily shuttle schedule, receive driver notifications, and save favorite stops and times. Drivers can view the route, see the next scheduled stop, estimate how many students may be waiting at a stop based on favorite-stop data, and send notifications to students. Managers can add, edit, and remove shuttle stops on the map and create or update the shuttle schedule for each day of the week. These updates are shared with the student and driver sides of the app through Firebase so that all users see the most current route information. The goal of the project is to provide a more organized shuttle system and encourage greater student use of campus transportation.


Project Objective
The objective of this project was to create an Android-based shuttle application that provides real-time and schedule-based information for students while also giving drivers and managers tools to manage communication, stops, and routes. The app is intended to improve convenience, reduce uncertainty, and make campus shuttle transportation more efficient and easier to use.

Manufacturing Design Methods
The application was designed and developed using Android Studio with Kotlin for the front-end and Firebase for backend services such as authentication, Firestore database storage, and live data synchronization. Google Maps was integrated to display shuttle location and stop markers visually. The system was divided into three main interfaces based on user role: student, driver, and manager. A modular design approach was used so that scheduling, stops, and notifications could be managed centrally and reflected across all user views. Testing was performed throughout development to verify navigation, Firebase connectivity, schedule updates, and notification behavior.

Specification
Platform: Android mobile devices Programming Language: Kotlin/XML Development Environment: Android Studio Backend Services: Firebase Authentication and Cloud Firestore Mapping Service: Google Maps API User Roles: Student, Driver, Manager Student Features: live map, daily schedule, favorite stops, notifications, estimated student count at next stop Driver Features: live location sharing, route view, next-stop view, stop-based notifications Manager Features: add/edit/delete stops, update daily schedules, manage route data across the app

Analysis
The project demonstrates how a simple mobile system can improve communication and visibility in the campus shuttle environment. By allowing managers to control the official stop locations and schedule, the app reduces inconsistencies across users. Firebase integration enables real-time updates so that schedule and stop changes are reflected without manually updating each device. The use of favorite-stop data also adds a predictive element by estimating student demand at upcoming stops. Overall, the design supports better decision-making for drivers and more reliable information for students.
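The favorite-stop demand estimate mentioned above amounts to counting how many students saved the upcoming stop. A minimal sketch, assuming `favorites` maps student IDs to their saved stops (in the real app this data lives in Firestore):

```python
from collections import Counter

def estimate_waiting(favorites: dict, next_stop: str) -> int:
    """Estimate how many students may be waiting at the next stop.

    Each student counts at most once per stop (set() deduplicates a stop
    saved twice), and the estimate is simply the number of students who
    favorited the upcoming stop.
    """
    counts = Counter(stop
                     for stops in favorites.values()
                     for stop in set(stops))
    return counts[next_stop]  # Counter returns 0 for unknown stops
```

The driver view can call this with the next scheduled stop to decide whether a stop is likely to be skippable or busy.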

Future Works
Future improvements could include automatic background notifications even when the app is fully closed, more advanced delay prediction using live shuttle movement, manager analytics dashboards for stop demand trends, and stronger role-based security to limit certain actions to approved driver or manager accounts only. Additional features such as route history, accessibility settings, and support for multiple shuttle routes could also be added in future versions. We also hope to eventually bring the app to the iPhone.


Acknowledgement
We would like to acknowledge our instructor, advisor, and all others who provided feedback and support during the design and development of this project. We also acknowledge the use of Android Studio, Firebase, and Google Maps as essential tools that made this project possible.




REVU: Project Development Platform



Team Leader(s)
Chervelle Pierre

Team Member(s)
Arisa Laloo

Faculty Advisor
Marius Silaghi

Secondary Faculty Advisor
Philip Chan



REVU: Project Development Platform  File Download
Project Summary
REVU is a web-based project development and review platform designed to help instructors monitor student software repositories and evaluate both team progress and individual contributions. The system integrates Google OAuth for secure login, role-based access control for students, instructors, and administrators, and GitHub repository analysis to collect commit activity. REVU generates overall project reports as well as contributor-specific reports, giving instructors better visibility into collaboration, activity trends, and code development patterns. The platform improves transparency in team-based software projects and supports more efficient, data-informed evaluation.


Project Objective
The objective of REVU is to create a secure and user-friendly platform that allows students to register and submit project repositories while giving instructors tools to analyze repository activity. The system is intended to support contributor-level tracking, generate AI-assisted summaries and reports, and improve the fairness and efficiency of assessing team-based software development projects.

Manufacturing Design Methods
The system was developed using a modular, layered design approach to ensure scalability, maintainability, and clear separation of responsibilities. A three-tier architecture was implemented, consisting of a frontend interface, a backend API built with Flask, and a relational database for persistent storage. The backend handles core logic including authentication, data processing, and integration with external services such as the GitHub API for retrieving repository and commit data. An evaluation pipeline was designed to process this data through multiple stages, including compilation tracking, metric generation, and large language model (LLM) analysis for code evaluation. The database schema was structured to manage entities such as users, repositories, commits, and reports. This modular design allows individual components such as authentication, data retrieval, and reporting to be developed, tested, and extended independently.

Specification
The system is designed with a secure and structured workflow that supports both authentication and project evaluation. It utilizes Google OAuth for login, restricting access to approved academic email domains and requiring users to complete registration before account creation is finalized. Role-based access control is implemented to differentiate permissions for students, instructors, and administrators. Users can submit repositories and link contributors, while the system integrates with GitHub to retrieve commit histories and analyze development activity. All repository, commit, and contributor data is stored in a relational database to ensure consistency and accessibility. The platform generates both overall project reports and detailed per-contributor reports, which are accessible through an instructor dashboard designed for efficient review of project progress and individual contribution metrics.

Analysis
The project demonstrates that repository activity can be used to provide meaningful insight into both group and individual progress. By linking commits to registered contributors through GitHub usernames and email matching, REVU can distinguish team-wide activity from individual participation. The system architecture also supports future scalability by separating repository-level reporting from contributor-level reporting. Early testing showed that the platform can streamline instructor review workflows and improve visibility into collaboration patterns that would otherwise be difficult to assess manually.
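The commit-to-contributor matching described above — linking commits to registered students via GitHub usernames and email matching — can be sketched as follows. The field names mirror what the GitHub commits API exposes, but the function and data shapes here are illustrative, not REVU's actual code.

```python
def attribute_commits(commits, contributors):
    """Group commit SHAs by registered contributor.

    `commits` are dicts with a GitHub `login` and a commit `email`;
    `contributors` maps a student name to their registered login/email.
    Commits matching neither are collected under "unmatched" so an
    instructor can spot unregistered aliases.
    """
    by_login = {c["login"]: name
                for name, c in contributors.items() if c.get("login")}
    by_email = {c["email"]: name
                for name, c in contributors.items() if c.get("email")}
    report = {name: [] for name in contributors}
    report["unmatched"] = []
    for commit in commits:
        name = (by_login.get(commit.get("login"))
                or by_email.get(commit.get("email")))
        report[name if name else "unmatched"].append(commit["sha"])
    return report
```

Falling back from login to email matters in practice: commits made from a local git client carry only the author email, not a GitHub login.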

Future Works
Future improvements include refining the AI scoring logic, adding more advanced visual analytics, facilitating testing of submissions, and improving support for large team projects. Additional enhancements include deeper integration with institutional systems and deployment to a school-hosted production server with stronger monitoring and logging.


Acknowledgement
We would like to thank Dr. Marius Silaghi and Dr. Philip Chan for their guidance, feedback, and support throughout the development of this project. Their input helped shape both the technical direction and the practical goals of the platform.




Search and Rescue Coordinated Intelligence Systems



Team Leader(s)
Yavanni Ensley

Team Member(s)
Younghoon Cho, Yavanni Ensley, Jaylin Ollivierre

Faculty Advisor
Dr. Thomas Eskridge

Secondary Faculty Advisor
Dr. Chan



Search and Rescue Coordinated Intelligence Systems  File Download
Project Summary
SRCIS (Search and Rescue Coordinated Intelligence Systems) is a ROS2-based system designed to improve the effectiveness of search and rescue operations through human-robot collaboration. A key feature of the system is its continuous compositional control (CCC), which allows operators to provide guidance to the agents by remotely controlling them and then transitioning the control back to the agents. With humans and agents capable of changing their behaviors based on the environment, we can significantly reduce target search and tracking time beyond fully autonomous performance. In addition, SRCIS provides a scalable architecture that enables seamless integration of new robots with various roles. The system integrates heterogeneous platforms, including UGVs (Unmanned Ground Vehicles), Quadrupeds, and UAVs (Unmanned Aerial Vehicles), and can be further expanded by incorporating additional robotic platforms. Overall, SRCIS combines autonomy and human decision-making to improve efficiency in search-and-rescue operations.


Project Objective
SRCIS is a human-agent teamwork system designed to improve the effectiveness of search and rescue operations by merging human decision-making with robotic platforms through autonomous agents.

Manufacturing Design Methods
SRCIS was designed with a modular architecture that leverages publish/subscribe networks to enable effective communication among agents, human operators, and physical robots. High-level communication uses Hazelcast to facilitate communication with the ability to scale. Agents and human operators can directly command their physical shells through this layer, and the robots can provide information about the world (camera feeds, location, maps, etc.) back to the agents and humans. At the low level, all three robots leverage various ROS2 nodes to accomplish SLAM and target detection. To account for remote network communication, SRCIS uses a VPN (in our case, ZeroTier), which minimizes the need for complex routing and enables WebRTC’s peer-to-peer connections.
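The publish/subscribe layer described above can be reduced to a very small sketch. Hazelcast topics fill this role in the real system; here a plain in-process bus is a stand-in, showing only the decoupling property the architecture relies on — an agent, an operator UI, and a robot shell exchange messages by topic name without knowing about each other.

```python
class Bus:
    """Minimal in-process publish/subscribe bus (illustrative stand-in
    for the Hazelcast topic layer)."""

    def __init__(self):
        self._subs = {}  # topic name -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self._subs.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver to every subscriber of this topic; publishers never
        # hold references to subscribers directly.
        for cb in self._subs.get(topic, []):
            cb(message)
```

Scaling this pattern to a distributed setting is exactly what Hazelcast provides: the same topic semantics, but across machines on the VPN.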



Future Works
The system can be expanded with specialized robots for scouting, tracking, and payload delivery. Computer vision can also be integrated for real-time target detection and recognition, while the human control interface can be refined for more intuitive control. Additionally, autonomous decision-making can be developed to enable robots to adapt to dynamic environments. Furthermore, cellular communication (e.g., LTE) can be utilized to support operation over wider and more distributed environments.






SLAIT




Team Member(s)
Maria Linkins-Nielsen, Michael Bratcher

Faculty Advisor
Marius Silaghi

Secondary Faculty Advisor
Philip Chan



SLAIT  File Download
Project Summary
The Secure Language Assembly Inspector Tool (SLAIT) is a web-based educational platform designed to help students safely explore and understand assembly-level program execution without requiring local debugger setup or reduced system security settings. SLAIT allows users to upload assembly programs through a browser, select specific instruction lines for inspection, and observe how registers and processor flags change step-by-step during execution. Programs run inside an isolated Docker sandbox environment, ensuring secure execution while protecting the user’s system. The platform then returns structured register and flag data for visualization through an interactive interface hosted on an application server. SLAIT supports instruction in computer architecture and assembly programming by lowering technical setup barriers and enabling safe experimentation with low-level code behavior. This makes it especially useful for students learning debugging concepts, instruction-level execution flow, and processor state changes in a controlled environment.


Project Objective
The objective of the Secure Language Assembly Inspector Tool (SLAIT) is to develop a secure, browser-based platform that enables users to upload assembly programs, select instruction inspection points, and visualize register and flag state changes during execution without requiring local debugger installation. The platform improves accessibility to instruction-level debugging while maintaining system security through sandboxed execution.

Manufacturing Design Methods
SLAIT was implemented as a distributed web application using a frontend interface connected to a backend execution environment hosted on an application server. Assembly programs are uploaded through the browser and executed inside isolated Docker containers to ensure safe runtime behavior. The backend assembles and executes programs using NASM and GDB, captures register and flag values at selected breakpoints, and returns structured execution data to the frontend for visualization. The frontend provides a step-level execution interface that allows users to observe processor state transitions across program instructions.
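The GDB automation described above can be sketched by building a non-interactive (batch) invocation: set a breakpoint at each user-selected line, run, and dump registers at every stop. The helper name and sequencing are illustrative, not SLAIT's actual backend code, and parsing the `info registers` output into JSON is left out of this sketch.

```python
import subprocess  # used in the commented invocation below

def gdb_register_dump_cmd(binary, breakpoints):
    """Build a GDB batch command that dumps registers at chosen lines.

    GDB executes -ex commands in order: set all breakpoints, run to the
    first stop, then alternate `info registers` / `continue` once per
    breakpoint so each stop prints the register file.
    """
    cmd = ["gdb", "-batch", binary]
    for line in breakpoints:
        cmd += ["-ex", f"break {line}"]
    cmd += ["-ex", "run"]
    for _ in breakpoints:
        cmd += ["-ex", "info registers", "-ex", "continue"]
    return cmd

# Hypothetical usage, after `nasm -f elf64 prog.asm` and linking:
#   subprocess.run(gdb_register_dump_cmd("./prog", [12, 20]),
#                  capture_output=True, text=True)
```

Running this inside a Docker container, as SLAIT does, is what makes it safe to execute arbitrary student-supplied assembly.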

Specification
Key system specifications include: - Web-based assembly program upload interface - User-selected breakpoint inspection support - Register and processor flag state capture - Step-level execution visualization - Docker-based sandbox execution environment - Backend automation using NASM and GDB - JSON-formatted execution state responses - Platform-independent browser accessibility

Analysis
Testing confirmed successful communication between the frontend interface and backend execution environment. Assembly programs were assembled, executed inside containers, and register/flag states were captured at selected instruction breakpoints. The sandboxed execution workflow ensured safe runtime isolation while maintaining accurate processor state inspection. The system demonstrates that instruction-level debugging can be performed remotely without requiring local debugging environments or elevated permissions. Initial evaluation indicates that the platform effectively supports instructional use in computer architecture and assembly programming courses by simplifying access to low-level execution visibility.

Future Works
Future development of SLAIT will include expanding support for additional assembly syntaxes such as MASM, improving the platform’s adaptability across instructional environments that use different assembler formats. Planned enhancements also include extending register inspection capabilities to support full 64-bit register tracking. Additional work will focus on completing structured user testing and usability evaluation to measure how effectively students interpret register and flag state changes using the visualization interface. Feedback from these evaluations will guide improvements to the frontend visualization workflow and overall instructional usability of the platform.

Other Information
Want to use our tool? Make sure you are connected to the Florida Tech Wi-Fi and visit http://172.16.135.14/

Acknowledgement
We would like to thank our advisors, Dr. Chan and Dr. Silaghi for guiding us through this process and supporting our project development.




SpacePNP



Team Leader(s)
Khurram Valiyev

Team Member(s)
Khurram Valiyev, Sam Warner, Jabari Sterling, Samuel Kaguima

Faculty Advisor
Dr. James Brenner

Secondary Faculty Advisor
Dr. Philip Chan



SpacePNP  File Download
Project Summary
The goal of SpacePNP is to streamline the electronic component procurement process by replacing slow, complex interfaces with a high-efficiency UI. Currently, users face significant delays and errors due to the manual effort required to navigate traditional distributor websites and verify part compatibility. SpacePNP addresses these pain points by integrating an automated compatibility checker that ensures parts meet project specifications before purchase. By reducing search time and mitigating the risk of ordering incorrect components, the system enables a more accurate and productive engineering workflow.


Project Objective
Our project aims to streamline the way electronic components are selected and integrated. It focuses on automated validation by replacing manual datasheet cross-referencing with logic-based compatibility checks. It also simplifies sourcing by reducing search fatigue through an intuitive interface and more precise search capabilities. To improve design accuracy, the project enables 3D visualization of component fit, helping prevent mechanical assembly errors. In addition, it fosters collaboration by creating a centralized knowledge base that supports community-driven design feedback.

Manufacturing Design Methods
SpacePNP is developed using a modular full-stack architecture based on Node.js, Express.js, and MongoDB for structured data management. A custom API client integrates supplier data using authenticated requests and rate-limiting to ensure reliable performance. The platform includes an interactive schematic editor built on an HTML5 Canvas, enabling users to visually configure components. Design accuracy is supported through grid-snapping, collision detection, and state management features. Security is implemented through authentication, encrypted data handling, and protected API communication, while system reliability is maintained through automated testing and validation processes.

Specification
The system shall provide an intuitive, e-commerce-style user interface that enables users to browse and search for components with ease. It shall support efficient search and filtering through a specialized fuzzy search algorithm with caching to deliver accurate and responsive results. The platform shall include a schematic-based tool that allows users to visually configure components and validate their compatibility within a design. Users shall be able to create and manage project-specific component lists, with options to export or share them publicly on the platform. Additionally, the system shall support community collaboration through an interactive forum where users can discuss components, share projects, and exchange design ideas and recommendations.
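The fuzzy search with caching called for above can be sketched in a few lines. This is not the platform's actual algorithm (which runs in Node.js against MongoDB) — here `difflib` ranks candidates by similarity and `lru_cache` plays the role of the response cache, and the parts list is illustrative, not the real catalog.

```python
from difflib import get_close_matches
from functools import lru_cache

# Illustrative catalog; the real system queries supplier data.
PARTS = ("lm317 regulator", "lm358 op-amp", "ne555 timer", "atmega328p mcu")

@lru_cache(maxsize=256)
def search_parts(query, limit=3):
    """Fuzzy part search with result caching.

    get_close_matches ranks candidates by similarity ratio (best first);
    a low cutoff keeps near-misses like transposed digits in the results.
    Repeated queries hit the lru_cache and skip matching entirely.
    """
    return tuple(get_close_matches(query.lower(), PARTS,
                                   n=limit, cutoff=0.3))
```

Caching matters here because part searches are highly repetitive: the same handful of popular components dominates query traffic.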

Analysis
The system was evaluated based on speed, accuracy, reliability, and collaborative tool usage. Overall, it performed well across all metrics. For speed, most server responses completed within the 2-second target, indicating efficient backend performance. Accuracy was consistently high, with system outputs reliably reflecting user inputs across core features such as search, account creation, and the component builder, as confirmed through testing and bug reports. Reliability was stable, with the platform successfully completing tasks in about 9 out of 10 cases during testing. Finally, collaborative features showed active engagement, with users creating projects, using the component builder, and participating in forum discussions, demonstrating that these tools are being effectively utilized.

Future Works
Future development of the platform could focus on improving the accuracy and depth of component compatibility checking by incorporating more advanced electrical modeling and manufacturer-specific constraints. The search system could also be enhanced with smarter recommendations and AI-driven suggestions based on user behavior and project history. The schematic and 3D visualization tools could be expanded to support more complex multi-layer circuit designs and real-time simulation of electrical performance. Collaboration features could be further developed by adding version control for projects and integrating real-time co-editing between users.

Other Information
https://sites.google.com/my.fit.edu/spacepnp/home

Acknowledgement
This project would not have been possible without the guidance of Dr. James Brenner and Dr. Philip Chan.




Student Code Online Review and Evaluation 2.0




Team Member(s)
Dorothy Ammons, Shamik Bera, Rak Alsharif, Patrick Kelly

Faculty Advisor
Raghuveer Mohan

Secondary Faculty Advisor
Philip Chan



Student Code Online Review and Evaluation 2.0  File Download
Project Summary
Student Code Online Review and Evaluation (2.0) or S.C.O.R.E (2.0) is a web application for creating and submitting programming assignments. Professors are able to create classes, assignments, rubrics, test cases and rosters. Additionally, they may view grades, submissions, AI usage scores and similarity scores. Students may use the application to view and make submissions to their assignments, receiving automatic grades and feedback through their submission output.


Project Objective
The objective of this project is to streamline the submissions process for programming assignments. By allowing students to receive real-time feedback on their programming assignments, we can help them improve their submissions and manage their grade expectations. In addition, we offer professors a platform full of customizable options for creating their assignments. We want professors to automatically receive their students' grades, computed exactly the way they want those assignments to be graded.

Manufacturing Design Methods
S.C.O.R.E (2.0) follows a client-server architecture. It uses a React framework for the frontend, a Flask backend, and Firebase for the cloud database. Authentication is handled through Google OAuth.



Future Works
This project holds great potential for future improvements. We would love to see improved visuals for the integrity systems in our project. This could include clustering for similarity scores or returning the exact lines of code that are being detected as AI-created. Additionally, moving the server to a hosted platform, perhaps offered by Florida Tech, would eliminate the need for user-side executables. This would allow students to simply head to the webpage to use the web application.






Visualization For Formal Languages



Team Leader(s)
Chris Pinto-Font

Team Member(s)
Chris Pinto-Font, Andrew Bastien, Vincent Borrelli, Keegan McNear

Faculty Advisor
Dr. David Luginbul

Secondary Faculty Advisor
Dr. Philip Chan



Visualization For Formal Languages  File Download
Project Summary
Visualization for Formal Languages is an interactive Deterministic Finite Automata (DFA) canvas that allows users to see how a DFA is constructed, helping them better understand parsing and computation. The program also includes visualization of a Nondeterministic Finite Automata (NFA), which can be converted into a DFA. The program serves as a robust computational engine for educational purposes, helping any students who need extra help understanding the ins and outs of a DFA. It can also be used by a professor to demonstrate how to create and interpret automata. This is done by allowing users to draw state graphs, evaluate strings or regular expressions, and observe real-time animations. The seamless integration of complex mathematical logic into the custom-built, responsive graphical interface transforms abstract formal language theory into simple, readable ideas on a workspace we call the canvas. We also have a specific teaching mode to help students better grasp the concepts.


Project Objective
The primary objective of our project is to bridge the gap between theoretical computer science concepts and applied computational logic by providing a visual learning environment. Instead of relying on static diagrams, users follow along on a dynamic canvas, which reinforces their understanding of finite automata.

Manufacturing Design Methods
The program was built on a clean, modular architecture that keeps the algorithmic logic in the backend separate from the user interface in the frontend, ensuring scalability and ease of maintenance. Instead of relying on static visuals, our program uses the Tkinter Canvas, which supports dynamic animation, drag-and-drop, and smooth transition lines.

Specification
Python 3 is the only programming language used in the program, allowing for a clean, readable codebase. We also used the native Tkinter library for the graphical interface to maintain lightweight, simple execution and maximum compatibility.
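The core DFA stepping that the canvas animates can be sketched in plain Python 3. The function and transition-table shape here are illustrative, not the project's actual engine: transitions map a (state, symbol) pair to the next state, and the returned trace is the sequence of states a visualizer would highlight one at a time.

```python
def run_dfa(transitions, start, accepting, s):
    """Step a DFA over an input string, recording each state visited.

    Returns (accepted, trace). A missing transition rejects immediately
    (the implicit dead-state case).
    """
    state, trace = start, [start]
    for ch in s:
        if (state, ch) not in transitions:
            return False, trace
        state = transitions[(state, ch)]
        trace.append(state)
    return state in accepting, trace

# Example: binary strings containing an even number of 1s.
T = {("even", "0"): "even", ("even", "1"): "odd",
     ("odd", "0"): "odd",   ("odd", "1"): "even"}
```

A teaching mode can replay the trace step by step, asking the student to predict the next state before revealing it.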

Analysis
The visualizer successfully achieves its purpose through efficient handling of complex graphical logic, including non-determinism and lambda transitions, all without reducing graphical performance. By providing an interactive teaching mode rather than automatic solutions, users can be tested on their knowledge, demonstrating the growth of their understanding of abstract concepts.

Future Works
Future development will focus on expanding the program's logic to include Pushdown Automata and Turing Machines, as well as upgrading the graphical user interface to handle dense graphs and improve the layout of transition lines.

Other Information
You can use the program from a direct download from our website: https://kmcnear2022.github.io/

Acknowledgement
We achieved our results in the program thanks to our faculty advisor, who helped us gain a better grasp of formal languages. We also appreciate the feedback and support from our users.




Wallee.



Team Leader(s)
Emma Bahr

Team Member(s)
Emma Bahr, Kyle Gibson, Joshua Cajuste, and Matteo Caruso

Faculty Advisor
Dr. Siddartha Bhattacharyya

Secondary Faculty Advisor
Dr. Phillip Chan



Wallee.  File Download
Project Summary
We developed a mobile cross-platform personal finance application called Wallee to help users better understand and manage their money in one place. The app connects securely to users’ bank accounts through the Plaid banking API to automatically import and update transactions in real time. Users can view an overview of their finances on a home dashboard, explore spending breakdowns and trends, and track recent transactions categorized automatically. The application also includes a dynamic budgeting system that adjusts based on spending behavior, along with savings goals that allow users to set targets and monitor their progress over time. In addition, Wallee includes an AI-powered chat assistant (Wallo) that provides personalized financial insights, answers questions about spending patterns, and helps users make informed budgeting decisions. The goal of the project is to simplify personal financial management and give users clearer, more actionable insight into their financial health.


Project Objective
The objective of Wallee is to address the limitations of existing personal finance tools by providing an adaptive, intelligent, and user-centered financial management system specifically designed for variable-income users. The application aims to replace static budgeting models with an automated, paycheck-aware system that continuously recalibrates budgets based on real-time income changes. It also seeks to improve the reliability of financial guidance by using a two-layer AI architecture that combines generative AI with verified financial logic to ensure that all recommendations are consistent with the user’s actual financial data. In addition, Wallee introduces a dynamic financial health scoring system to encourage better financial habits through continuous feedback and engagement. Finally, the project focuses on delivering a clean, cognitively accessible interface that transforms complex financial data into clear, actionable insights, ultimately bridging the gap between raw transaction data and meaningful financial decision-making.

Manufacturing Design Methods
The manufacturing and design methods for Wallee follow an iterative, user-centered development approach focused on modular architecture, secure data handling, and scalable system integration. The system is designed using a layered architecture consisting of a Flutter-based cross-platform frontend, a Node.js (NestJS) backend for core application logic, and in-app advanced analytics and financial computation. Secure financial data integration is achieved through the Plaid API, which enables real-time transaction syncing via webhooks and structured ingestion into a Supabase database. On the design side, the application follows a component-driven UI approach in Flutter, emphasizing clarity, minimal cognitive load, and accessibility for variable-income users. Key features such as the dashboard, budgeting system, goals tracker, and transaction views are developed as reusable modules to ensure consistency and maintainability. The financial logic layer is separated from the UI to ensure that budgeting recalculations, income detection, and health scoring remain accurate and independently testable. The AI system is implemented as a two-layer model: Wallee Zero, which handles deterministic financial calculations and rule-based validation, and Wallo, which serves as the user-facing conversational interface that retrieves only verified insights from the underlying logic layer. Development follows an agile workflow with continuous testing and refinement based on usability feedback, ensuring that both financial accuracy and user experience are maintained throughout the system.
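The paycheck-aware budget recalibration described above can be sketched as a pure function in the deterministic logic layer. The weights and category names are illustrative assumptions, not Wallee's actual model: fixed obligations come off the top, and the remainder is split across categories by weight, so category budgets scale automatically with a variable income.

```python
def recalibrate(paycheck, fixed, weights):
    """Recompute category budgets from the latest paycheck.

    `fixed` covers non-negotiable obligations (rent, subscriptions);
    the remaining disposable income is divided among categories in
    proportion to their weights. Keeping this a pure function makes it
    independently testable, as the logic-layer separation requires.
    """
    disposable = max(paycheck - fixed, 0.0)
    total = sum(weights.values())
    return {cat: round(disposable * w / total, 2)
            for cat, w in weights.items()}
```

Because the function depends only on its inputs, a smaller paycheck immediately shrinks every category in proportion rather than leaving a stale budget in place.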



Future Works
Future work for Wallee will focus on expanding predictive and automation capabilities to make the system more proactive and personalized for users. This includes adding forecasting features that estimate future income, spending, and savings based on historical financial patterns, as well as improving the AI system to support more advanced financial guidance such as tax estimation, debt management strategies, and long-term planning while maintaining verified, logic-based outputs. Additional improvements include smarter automation for detecting recurring bills, managing subscriptions, and refining transaction categorization over time through adaptive learning. The platform could also be expanded to support more financial institutions beyond the current integration, along with enhanced gamification of financial health scoring to increase user engagement. Finally, future development will focus on improving scalability, performance, and customization options to support a growing user base and provide a more tailored financial management experience.


Acknowledgement
We would like to acknowledge the support and contributions of everyone who helped make the Wallee project possible. We extend our gratitude to our faculty advisor for their guidance, feedback, and encouragement throughout the development process, as well as for providing valuable insight into system design and implementation. We also thank the developers and maintainers of the technologies used in this project, including Flutter, Node.js, Supabase, and the Plaid banking API, which were essential in building a secure and scalable financial platform. Finally, we appreciate the support from peers and reviewers who provided feedback during testing and helped improve the usability, functionality, and overall design of the application.