Fiorentini MS Projects
MS Projects
Working on a Project
How to Join a Project
Please use the tabs below (under "Ongoing Projects" and "Past Projects") to learn about the MS projects I am currently working on, or have worked on in the past, with my MS students. If you are interested in any of these projects, please contact me to see whether your skill set is a good match for the project and how you can join the team.
Although I encourage students to join a team at the beginning of a semester, I allow students to do so up to the middle of the semester:
- mid-October for the Fall semester; and
- end of February for the Spring semester.
This is done to accommodate students who might drop a course at the last minute and suddenly find themselves with some extra time on their hands.
How Projects are Run
First Meeting(s)
During our first couple of meetings, the team will sit down to brainstorm and agree on the scope of the project, the main requirements, the deadlines, and the roles and responsibilities of each team member. Projects are run mainly as a learning experience for the students, so the scope of each project can be tweaked to better reflect the research interests of the different team members.
Individual Studies Credits (ECE 6193)
During our first meeting we will also decide how many credits of Individual Studies (ECE 6193) each student can enroll for, based on the work they are signing up for. As a general guideline, a student is expected to work on their project for 4-5 hours a week for each credit of ECE 6193 they are signed up for. Please do not ask me to sign you up for ECE 6193 credits until we have had our first team meeting. To sign up for ECE 6193 credits, please follow these instructions.
Project Proposal
After the first couple of meetings, each team will be asked to turn in a formal Project Proposal that highlights the work that will be done throughout the semester. In particular, the proposal will have to include:
- Scope statement;
- Work Breakdown Structure (WBS);
- Requirement traceability matrix;
- Gantt chart;
- Budget; and
- Responsibility Assignment Matrix (RAM).
Weekly Meetings
After the formal Project Proposal has been submitted, the team will receive my feedback within a couple of days and will officially start to work on the project following the (revised) proposal. Each team will meet with me once a week to discuss the progress and possible issues that might arise. Team members can meet as many times as needed every week. All students working on the project will be given access to the DL 808 MS Lab. Please follow the guidelines highlighted below when using the Lab.
The team is expected to follow the timeline presented in the Project Proposal unless serious unexpected issues arise. Excuses like "I didn't do much since last week because I had to study for a midterm" will not be considered acceptable.
Final Report
At the end of each semester, each team/student is asked to submit a Final Report that summarizes the work that was done throughout the semester. The report will need to include enough information and detail that a new student who wants to join the team can be brought up to speed quickly just by reading the report.
I suggest that all students keep a “diary” of the work done throughout the semester so as not to forget procedures, details, and lessons learned.
The Final Report can be used by the students as a starting point for their MS Final Report.
Final Grade
Individual Study (ECE 6193) credits are non-letter-graded credits. Specifically, a student is assigned a Satisfactory (S) grade if they:
- shows interest and participation in the project;
- shows they are capable of independent thinking and problem solving;
- shows constant progress throughout the semester;
- shows the ability to interact and collaborate with other peers;
- submits an acceptable Final Report.
An Unsatisfactory (U) grade will be assigned otherwise.
Access to the lab is restricted to MS students who are currently working on a project under my supervision. Students who would like to get hands-on experience should schedule a meeting with me to discuss project ideas. Students can either join an existing project or propose a new one.
If you are already working on a project with me, please send me a request via email (include your BuckID number) to be granted access to the Lab.
Using the MS Students Lab
Students who are granted access to the MS Laboratory can use the Lab when they are working on their MS project and for meetings related to their MS project. Note that students:
- should not grant access to the lab to friends or people they don’t know;
- should NOT go to the lab to:
- do homework or study for other courses;
- meet with other students for projects not related to their MS Project.
- If a student is the last person to leave the lab, they should turn off all the lights and make sure the door is locked after exiting the Lab.
Keep the Lab Clean and Organized
Please:
- Do not bring any food or drinks to the Lab;
- Do not take any material/equipment home unless you have been authorized by Prof. Fiorentini;
- Clean-up the working stations after you are done using them;
- Do not leave personal material in the Lab (books, notebooks, etc.), unless it is material that you are sharing with other teammates;
- Put chairs back if you move them around;
- Take all your trash out when you leave the Lab;
- Be considerate to other students who are currently working in the lab on other projects.
Ongoing Projects
In Fall 2021 Ohio State joined nine other universities in the AutoDrive Challenge™ II competition! The teams are taking on the challenge of developing and demonstrating an autonomous vehicle (AV) that can navigate urban driving courses with Level 4 automation as described by the SAE J3016™ standard.
The Buckeye AutoDrive team is composed of undergraduate, master's, and PhD students from Mechanical and Aerospace (MAE), Electrical and Computer (ECE), Computer Science (CSE), and Industrial Systems Engineering (ISE), and is advised by Research Associate Professor Qadeer Ahmed (MAE), Assistant Professor Harry Chao (CSE), and myself (ECE). The AutoDrive Challenge will develop the next generation of autonomous vehicle engineering professionals through a keen focus on:
- Sensing and perception technologies
- Simulation
- Integration of AV compute platform
- Software development
- Deep learning
- Artificial intelligence
- Sensor fusion
- Navigation and mission planning
- Autonomous vehicle controls
For more information please check our official website.
This project is sponsored by the Transportation Research Center (TRC Inc.). The goal of the project is to implement radar sensor detection and tracking functionality into an automated vehicle platform. The project will run from Fall 2022 to Spring 2024. More details will be made available soon.
This project is sponsored by SpaceWERX through the Orbital Prime program. The project involves developing a control system for a CubeSat system that can be used for space debris removal. American Citizenship is required to work on the project.
The aim of this project is to demonstrate the potential uses of a robotic hand. The robotic hand system can be used for multiple purposes; for instance:
- The robotic hand can be programmed to mimic the gesture performed by a user wearing a special glove equipped with sensors.
- The robotic hand can be used to display sign language signs.
This project is built using InMoov's right-hand design, servos, flex sensors, and an Arduino. The Arduino is programmed to receive input from the flex sensors and output positions to the servo motors. The flex sensors are mounted on a glove to allow control by a human hand. Computer input is done through the Arduino's Serial Monitor. From these simple tools, the hand can sign any word or phrase by spelling it out letter by letter. The hand can perform pre-programmed words or be used in real time to allow remote use by an interpreter.
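The sensor-to-servo step described above is essentially a linear rescaling of the flex sensor's analog reading (0-1023 on the Arduino's 10-bit ADC) onto the servo's angular range (0-180 degrees). The firmware itself is written in Arduino C, but the mapping logic can be sketched in Python as follows; the function name and ranges are illustrative defaults, not taken from the project code.

```python
def flex_to_servo_angle(reading, in_min=0, in_max=1023, out_min=0, out_max=180):
    """Map a flex sensor ADC reading onto a servo angle, clamping out-of-range input.

    Mirrors the behavior of Arduino's map() plus constrain(); the ranges here
    are typical defaults (10-bit ADC, 0-180 degree hobby servo).
    """
    reading = max(in_min, min(in_max, reading))
    span_in = in_max - in_min
    span_out = out_max - out_min
    return round((reading - in_min) * span_out / span_in + out_min)

# A fully bent finger (maximum reading) drives the servo to full deflection:
print(flex_to_servo_angle(1023))  # 180
print(flex_to_servo_angle(0))     # 0
```

The same rescaling runs once per flex sensor on every loop iteration, so each glove finger independently commands its corresponding servo.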
The goal of the OSU F1/10 Team is to build a vehicle capable of racing autonomously on an unknown track using a set of sensors. The name F1/10 comes from the fact that the vehicle is 1/10 the size of an F1 race car [1]-[3]. This project allows students to work on very different topics:
- Simultaneous Localization And Mapping (SLAM), sensor fusion, and real-time data acquisition. Initially the vehicle was equipped only with a LIDAR; now an IMU and a 3D camera are being integrated;
- Path planning and high-level control: the goal is for the vehicle to complete a course as fast as possible without hitting walls/obstacles;
- Low-level speed and steering control;
- ROS: the control board is programmed using ROS;
- System simulations in Gazebo environment.
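As a concrete illustration of the path-planning side, one common high-level strategy on F1/10-style platforms is "follow the gap": scan the LIDAR ranges for the widest stretch of free space and steer toward its center. The sketch below is a simplified, hypothetical version of that idea, not the team's actual ROS node; the field of view, threshold, and sign convention are assumptions.

```python
import numpy as np

def steer_toward_widest_gap(ranges, fov_deg=180.0, min_range=1.0):
    """Return a steering angle in degrees (0 = straight ahead) pointing at the
    center of the widest contiguous run of LIDAR beams whose range exceeds
    min_range. Negative angles steer toward the first beam of the scan."""
    free = np.asarray(ranges) > min_range
    best_len, best_start, start = 0, None, None
    for i, is_free in enumerate(free):
        if is_free and start is None:
            start = i  # a free run begins here
        if start is not None and (not is_free or i == len(free) - 1):
            end = i if not is_free else i + 1  # the run just ended
            if end - start > best_len:
                best_len, best_start = end - start, start
            start = None
    if best_start is None:
        return 0.0  # no gap found: go straight (a real car would brake)
    center = best_start + best_len / 2.0
    # Map the beam index to an angle in [-fov/2, +fov/2]
    return (center / len(free) - 0.5) * fov_deg

# Walls on both sides, opening straight ahead -> steer straight:
print(steer_toward_widest_gap([0.5] * 30 + [3.0] * 40 + [0.5] * 30))  # 0.0
```

In practice the chosen angle would be passed to the low-level steering controller, and the scan would come from the LIDAR driver's range array rather than a hand-built list.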
Preliminary Results
A team of MS students developed a first prototype of the vehicle that can race autonomously using a LIDAR sensor. The vehicle transmits data remotely through an 802.11g/n access point, so it is possible to visualize performance data remotely (e.g., data from the LIDAR) as the vehicle is racing.
Vehicle Testing in the Basement of Dreese Lab - Dec 2019
Vehicle Testing in the Basement of Dreese Lab - Dec 2018
Current and Future Work
A first prototype of the vehicle has been developed, but more work can be done to improve the performance of the vehicle. In particular, more sensors could be included (e.g., an IMU and 3D camera), and the path planning and high level control algorithms could be improved.
If you are interested in:
- Sensor fusion and real-time data acquisition;
- Simultaneous Localization And Mapping;
- Path planning and high-level control;
- Coding in ROS; and/or
- System simulations in the Gazebo environment,
please contact me to join the team.
Past Projects
This is an industry-sponsored project. More details will be shared with the students working on the project. Laser sensors will be used to collect measurements along a production chain and to detect possible offsets between parts of the product. The laser sensors need to be selected and integrated, the data needs to be collected and stored in a database (back end), and specific variables of interest need to be shown in a user interface (front end).
This is an industry-sponsored project. The purpose of this project is to integrate several sensors (thermistors) and actuators (heating films) using a microcontroller, and then program the microcontroller to achieve temperature regulation inside a storage cabinet. An hourly paid position is available for this project.
Required skills: familiarity with sensor integration and microcontroller programming. The microcontroller will be selected as part of the project tasks.
The purpose of the project is to use a camera, connected to a Raspberry Pi, to recognize different QR codes. The QR codes will be used to pinpoint different intersections of a miniature smart city (BuckeyeVille). Detecting the QR codes will allow the miniature autonomous vehicles to navigate the smart city.
Students can use any image processing and machine learning algorithm of interest to recognize the QR codes.
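On the software side, OpenCV ships a built-in detector (`cv2.QRCodeDetector().detectAndDecode(frame)`) that returns the payload string directly from a camera frame, so the project-specific work is largely in mapping decoded payloads to intersections. A minimal sketch of that mapping is below; the payload format ("BV-INT-05") and the coordinate table are invented for illustration, not taken from BuckeyeVille.

```python
# Hypothetical lookup table: intersection id -> (x, y) position in the city, in cm.
INTERSECTIONS = {3: (120, 40), 5: (200, 75)}

def intersection_from_qr(payload, table=INTERSECTIONS):
    """Parse a decoded QR payload such as 'BV-INT-05' and return the
    coordinates of that intersection, or None if the payload is unknown."""
    prefix = "BV-INT-"
    if not payload.startswith(prefix):
        return None
    try:
        return table.get(int(payload[len(prefix):]))
    except ValueError:
        return None

print(intersection_from_qr("BV-INT-05"))  # (200, 75)
```

Keeping the detection step (any OpenCV or machine-learning method) separate from the payload-to-location lookup lets students swap recognition algorithms without touching the navigation logic.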
The scope of the project is to build a mini smart city, BuckeyeVille, that resembles the OSU campus. In this smart city, miniature autonomous vehicles will be able to move around, mimicking self-driving cars. Each miniature autonomous vehicle will be assigned a different task (e.g., going to the coffee shop, then to the library, and finally to the gym) that will have to be completed while obeying traffic rules.
Phase-1 of the Project
For the sake of simplicity, Phase-1 of the project considers only intersections with stop signs (no traffic lights or roundabouts) and only two-lane streets (one lane for each traffic direction). The vehicles will use a front camera and two ultrasonic sensors to drive on the right side of the street and to detect incoming traffic at a stop sign. The autonomous vehicles will not have a map of the city, but will be connected to a navigation system that provides them with their current position and the next waypoint(s), as in a real self-driving scenario. The navigation system will not provide vehicles with information about lane markings, traffic signs, and/or other vehicles; these will have to be detected autonomously by each vehicle. Because GPS accuracy is unacceptable indoors, especially for scaled-down systems, a virtual GPS system based on OptiTrack will be used to detect the position of the vehicles. Once the tasks for each autonomous vehicle have been defined, the navigation system will provide each car, in real time, with its current position, obtained from the virtual GPS system, and the next waypoint(s) to follow to reach the next target.
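With only a current position and the next waypoint coming from the navigation system, the basic steering computation on each vehicle reduces to a heading error: the angle between where the car is pointing and the bearing to the waypoint. A minimal sketch follows; the coordinate convention and function name are assumptions for illustration, not BuckeyeVille code.

```python
import math

def heading_error_deg(position, heading_deg, waypoint):
    """Signed angle (degrees, wrapped to [-180, 180]) the vehicle must turn
    to face the next waypoint. position and waypoint are (x, y) tuples in the
    virtual-GPS frame; heading is measured counterclockwise from the +x axis."""
    dx = waypoint[0] - position[0]
    dy = waypoint[1] - position[1]
    bearing = math.degrees(math.atan2(dy, dx))
    # Wrap the difference into [-180, 180] so the car takes the short way around
    return (bearing - heading_deg + 180.0) % 360.0 - 180.0

# Facing +x, waypoint up-and-to-the-right -> turn 45 degrees counterclockwise:
print(heading_error_deg((0, 0), 0.0, (1, 1)))
```

A simple proportional steering law on this error, combined with the camera-based lane keeping, is enough to drive the vehicle from waypoint to waypoint.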
Future-Phases of the Project
The project has been temporarily put on pause because of the AutoDrive Challenge competition, but the following additional features could be considered in the future:
- Autonomous vehicles will be equipped with infrared cameras for night vision;
- The tasks for the autonomous vehicles will be selected using a keypad placed on the autonomous vehicle;
- The autonomous vehicles will be able to distinguish moving obstacles from static obstacles and treat them appropriately; e.g., autonomous vehicles might try to pass static obstacles using the passing lane;
- Pedestrians will be added to the city: small robots mimicking pedestrian behavior will be considered. They will be programmed to cross pedestrian crossings at random times. The autonomous vehicles will have to stop at a safe distance as soon as they see a pedestrian approaching the pedestrian crossing.
- The OptiTrack system will be replaced with an equivalent system completely designed by the students;
- Traffic lights, bridges and roundabouts will be added to the infrastructure;
- Emergency vehicles will be considered: emergency vehicles will have sirens; regular vehicles will have to stop when a siren is on and a different control strategy will have to be developed for the emergency vehicles to navigate traffic.
- Dynamic routing will be implemented to consider scenarios where an accident might occur; cars will be re-routed and will receive the info from the navigation system;
- V2V and V2I scenarios will be considered to increase security (e.g., look for stopped traffic) and to minimize traffic and traveling time for each car;
- Parking assignments will be included, for which sensors in the back of the vehicle will be needed;
- Missions that are not pre-assigned will be considered; e.g., a vehicle drives around until it receives a destination point (as with Uber).
Students will have to:
- build the city by selecting the road configuration, signs, traffic lights, points of interest, etc. An example of a platform already available at OSU is shown in the figure above;
- design the robots (scaled versions of a passenger car) by selecting all the components, including a suitable sensor suite;
- design the control logic that will allow the robots to complete their tasks while obeying traffic rules;
- select and design the communication system that will allow the robots to communicate with the navigation system.
If you are interested in:
- embedded systems;
- communication;
- control systems; and
- image processing and machine learning
please contact me to join the project.
The project focuses on the development of a UAV platform-based multi-sensor system for early detection and monitoring of powdery and downy mildew in cucurbit crops.
Cucurbit Downy Mildew (DM) and Powdery Mildew (PM) are among the most important diseases of cucurbits worldwide, causing severe reductions in yield and loss of fruit quality. In addition to employing host-plant resistance, fungicide applications are used for crop protection and are initiated when these diseases have been detected in a field or a neighboring county or state. Due to the virulence of these pathogens, scouting and treatment are essential to reduce marketable losses. Traditional scouting requires walking fields and manually inspecting plants for symptoms and signs of infection. However, this approach is very labor-intensive, particularly on large-scale farms, and relies on the experience of the scout to recognize signs of these diseases.
The goal of this project is to design a UAV platform that uses a sensor array to detect and pinpoint signs of DM/PM diseases on cucumber and pumpkin crops. The use of this technology will have several advantages. First, different sensors (RGB, IR, Multi-Spectral, etc.) will allow for early disease detection, possibly even before signs are noticeable to the human eye. Secondly, the UAV will require minimal human supervision and will be able to scout crops more frequently and thoroughly than before. Lastly, after initial disease detection, the UAV will remain useful by monitoring crop health and helping to evaluate fungicide efficacy and optimize sprayer operation and coverage. The proposed technology could also be adapted in the future to identify and quantify damage caused by diseases, insects, and weeds on different crops.
Cucumber and Pumpkin crops have been made available for the project at the OSU Waterman Farm and at the OSU Western Agricultural Research Station. MS students have collected multispectral images of the crops under healthy and unhealthy conditions (plants affected by PM and DM). These images are being used to derive classifiers capable of detecting the diseases.
Preliminary Results
Students are working on extracting significant features from the images that classifiers can use to categorize the status of the crop (healthy/unhealthy). In particular, for feature extraction, Local Binary Patterns (LBP) and the Gray-Level Co-occurrence Matrix (GLCM) are some of the methods under investigation. As classifiers, K-Nearest Neighbors (KNN), Logistic Regression, Support Vector Machines (SVM), Neural Networks, and Random Forest algorithms are being used. Using the set of images previously described (half for training the algorithms and half for validation), 90% accuracy was achieved in crop health detection.
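The training/validation protocol described above (half the labeled images for training, half for validation) can be sketched end to end with scikit-learn. The feature vectors below are synthetic stand-ins for the real LBP/GLCM descriptors, so the numbers are purely illustrative:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Synthetic 4-dimensional feature vectors standing in for LBP/GLCM features:
healthy = rng.normal(0.2, 0.05, size=(40, 4))
diseased = rng.normal(0.8, 0.05, size=(40, 4))
X = np.vstack([healthy, diseased])
y = np.array([0] * 40 + [1] * 40)  # 0 = healthy, 1 = diseased

# Half for training, half for validation, mirroring the split described above:
train = np.r_[0:20, 40:60]
val = np.r_[20:40, 60:80]
clf = KNeighborsClassifier(n_neighbors=3).fit(X[train], y[train])
accuracy = clf.score(X[val], y[val])
print(f"validation accuracy: {accuracy:.2f}")
```

Swapping `KNeighborsClassifier` for `LogisticRegression`, `SVC`, or `RandomForestClassifier` changes only one line, which is why the team can compare all of these classifiers on the same feature sets.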
Current and Future Work
So far, only a few features and hyperspectral bands have been used to build the classifiers, so students are working on improving the methodology by considering more bands and combinations of bands (e.g., vegetation indices) and by applying more methods for feature extraction.
If you are interested in:
- feature extraction;
- object classification;
- deep learning and machine learning;
- hyperspectral imaging;
please contact me to join the team.
Lisa Fiorentini, Dept. of Electrical and Computer Engineering, OSU
Wladimiro Villarroel, Dept. of Electrical and Computer Engineering, OSU
James Jasinski, Dept. of Extension, OSU
John Fulton, Dept. of Food, Agricultural, and Biological Engineering, OSU
Sally Miller, Dept. of Plant Pathology, OSU
The purpose of this project is to develop an efficient and reliable method to detect Massasaugas, federally threatened rattlesnakes found in Ohio, using multispectral imaging. Over the summer, a large set of multispectral images of the rattlesnakes and different habitats were collected in collaboration with the Ohio Biodiversity Conservation Partnership using a Red Edge Camera.
Students will have to build a classifier, and train it with part of the multispectral images collected, to detect the presence of a Massasauga rattlesnake.
If you are interested in:
- feature extraction;
- object classification;
- deep learning and machine learning;
- multispectral imaging;
please contact me to join the team.
Project done in collaboration with:
Gregory Lipps
Amphibian & Reptile Conservation Coordinator
Ohio Biodiversity Conservation Partnership
Ohio State University
Redesign of the central processing unit for the Seaology® pCO2 system.
The system history:
In 2004, the National Oceanic and Atmospheric Administration (NOAA) Climate Program Office (CPO) initiated a program to evaluate air-sea carbon dioxide (CO2) fluxes through high-resolution, time-series measurements of atmospheric boundary layers and surface ocean CO2 partial pressure (pCO2). To meet this new requirement, NOAA's Pacific Marine Environmental Lab (PMEL) needed a cost-effective, rugged, accurate CO2 sensor that was easier and cheaper to deploy and maintain than existing sensor technology. In 2009, PMEL partnered with Battelle to manufacture a sensor that collects CO2 data from surface seawater and marine boundary air. Battelle began producing the pCO2 sensor system as part of its Seaology® platform [1]. Today, the commercially available sensor collects data every three hours for 12 to 18 months before requiring service. It transmits daily summary files of its measurements back to clients to be examined, analyzed, and posted to the web in near-real time for review by the global scientific community.
Project:
The Seaology® pCO2 system was designed around a control unit that was already in use for several projects at the time. Since that time the technology has become obsolete and there is a desire to update the central processing unit (CPU) for the Seaology® pCO2 system.
Students working on the project are redesigning the Seaology® pCO2 CPU circuit card assembly (CCA) while maintaining the current CPU CCA's functionality and interfaces. Firmware is being written for the new CPU CCA so that it maintains the same function as the current system. Manufacturability is also taken into account, and any designs will go through a design review to confirm this.
Project Sponsored by:
Light-emitting diodes (LEDs) are increasingly used in airfield lighting applications due to their potential benefits to pilots and airport operators. One major benefit is the potential to provide more reliable operations and reduce maintenance costs through a longer useful life than incandescent lamps. LEDs could save millions of dollars, and the cost of conversion could be recovered in only a few years. However, airfield LED luminaires are relatively new, and sufficient long-term performance test results are not currently available to validate their long-life potential.
The goal of the project is to gain an understanding of LED lighting performance under various ambient temperature conditions (with a focus on summer hot temperatures and winter cold temperatures) for LED fixtures embedded in various pavement types in an airport-like environment. The results of the project will be used to better estimate light output depreciation.
Preliminary Results
A test field was developed at The Ohio State University Airport, where LED fixtures were installed on two different kinds of slabs: asphalt and PCC concrete. ECE students were in charge of developing a data acquisition system to collect temperature data from the LED fixtures, which were equipped with thermocouples. The picture below shows the architecture of the data acquisition system. The data from the LED fixtures is collected through a network of Master and Slave nodes and sent to a Master Hub, which is located inside the hangar and connected to a laptop. A Python script runs continuously on the laptop to retrieve the data from the Master Hub and upload/store it in a MySQL database. Data is shown in real time through a webpage. Android and iOS apps were also developed.
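The storage step in the script described above boils down to inserting timestamped (fixture, temperature) rows and reading them back for the webpage. A minimal sketch of that pattern is below, using Python's built-in sqlite3 as a stand-in for MySQL; the table layout and fixture naming are invented for illustration, not taken from the project.

```python
import sqlite3

def store_reading(conn, fixture_id, temp_c, ts):
    """Insert one thermocouple reading (Unix timestamp, degrees Celsius)."""
    conn.execute(
        "INSERT INTO readings (fixture_id, temp_c, ts) VALUES (?, ?, ?)",
        (fixture_id, temp_c, ts),
    )
    conn.commit()

conn = sqlite3.connect(":memory:")  # a MySQL connection in the real system
conn.execute("CREATE TABLE readings (fixture_id TEXT, temp_c REAL, ts REAL)")
store_reading(conn, "asphalt-fixture-3", 41.7, 1700000000.0)
rows = conn.execute("SELECT fixture_id, temp_c FROM readings").fetchall()
print(rows)  # [('asphalt-fixture-3', 41.7)]
```

Using parameterized queries (the `?` placeholders) rather than string formatting keeps the insert loop safe and fast, and the same pattern carries over directly to a MySQL client library.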
Project done in collaboration with:
FAA Technical Point of Contact (POC):
Don Gallagher
Ohio State University POCs:
Seth B. Young, Ph.D., AAE, CFI
Lisa Fiorentini, Ph.D.
Wladimiro Villarroel, Ph.D.
Iowa State University POC:
Halil Ceylan, Ph.D.
Texas A&M University POC:
Yunlong Zhang, Ph.D.
Project sponsored by:
Partnership to Enhance General Aviation Safety, Accessibility, and Sustainability (PEGASAS)
Federal Aviation Administration (FAA)
Center of Excellence (COE) for General Aviation
Autonomous vehicles are transforming communication, transportation, and warfare technologies. In particular, small drones and unmanned aerial vehicles (UAVs) are becoming popular devices within reach of the public. Drones are increasingly affordable and easy to customize, mainly due to the availability of open source programming environments for guidance and control.
In recent years a platform called Pixhawk has emerged as a very practical choice for medium-size drones. Pixhawk is an independent open-hardware project that aims to provide the standard for readily available, high-quality, and low-cost autopilot hardware designs for the academic, hobby, and developer communities. The hardware is remarkably light even though it is equipped with a complete set of sensors, including GPS, accelerometers, gyros, and a pitot tube.
The MathWorks team has recently developed a Matlab support package called the "Pilot Support Package" (PSP), through which it is possible to install autopilots designed in Simulink on the Pixhawk board. Therefore, using the advantages of the PSP, it is possible to design flight controllers, perform hardware-in-the-loop simulations (HILs), and run flight tests on real drones, making the implementation process faster than ever. In addition, the Pixhawk is configured with the open-source software QGroundControl, which provides full flight control and mission planning for any MAVLink-enabled drone. Using this software, it is possible to select which flight controller to install on the Pixhawk and then plan the mission via waypoint specification.
To test the controller design and its correct implementation, it is also possible to connect Matlab with the X-Plane 10 flight simulator to perform HIL simulations.
At the University of California Riverside (UCR), the UCR SkyTeam, led by Ph.D. student Raffaele Baggi, is already working on implementing controllers using the Pixhawk and the aforementioned software on a real drone.
Objectives
The objectives of the project are to learn how to implement controllers on the Pixhawk, how to use QGroundControl as a mission planner with a custom flight controller, and how to analyze the performance of the controllers on a real drone.
Activities
The main activities of the project can be summarized as follows:
• Design a flight controller in Simulink.
• Build the flight controller using the Matlab PSP.
• Install the flight controller in the Pixhawk.
• Demonstrate that the actuators act in accordance with the Simulink scheme.
• Perform hardware in the loop simulations to validate the controller.
• Analyze the performance of the autopilot.
Expected Outcomes
The expected outcome of the project is the creation of a pipeline for the design, implementation, and testing of feedback control algorithms for drones.
Tools provided
The tools that will be provided for this project are:
• Pixhawk cube with a complete set of sensors (Fig. d);
• Freewing Eurofighter Typhoon v2 (Fig. a);
• Matlab Pilot Support Package plus manual;
• QGroundControl software (Fig. b); and
• X-Plane 10 flight simulator (Fig. c).
Project done in collaboration with:
Prof. Andrea Serrani , OSU
Prof. Elisa Franco, UCLA
If you are interested in working on multidisciplinary projects that will allow you to interact with Electrical, Mechanical, Biological and Integrated Systems Engineers, please consider joining one of the projects listed below.
These projects run for two semesters, Fall and Spring, and you can participate by enrolling in ENGR 5901.01 (Fall Semester) and ENGR 5902.01 (Spring Semester). Note that to receive credits for this project you need to work with the team both semesters. Please contact me if you are interested in any of these projects.
Automated Visual Inspection
Honda Manufacturing is trying to maintain the quality and integrity of the molds' core pins, but this is becoming a big challenge. Early identification of bent and broken core pins, as well as surface defects, can reduce the costs of block scrap and rework. The objective is to use a charge-coupled device (CCD) camera to develop a means to automate the inspection of parts to assure good quality and meet Honda's standards.
Sponsored by: Honda of America Manufacturing
NSX Paint Fixture
The Performance Manufacturing Center is where Acura's NSX is engineered and produced. The current painting fixtures have been in service for over five years, and their continued use could potentially lead to catastrophic failures. The objective is to redesign and improve the painting process and to reduce the costs/materials associated with the current fixture. The team will have an opportunity to visit the center to understand the current painting process and then build and potentially test a prototype model.
Sponsored by: Honda Performance Manufacturing Center
Occupant Wellbeing Phase II
This project is a continuation of the 2017-18 MDC project. The objective is to investigate technologies/strategies to reduce occupant anxiety, whereas the previous team's objective was to detect anxiety. The team will research strategies to identify occupant anxiety and how to reduce it. A prototype from the previous team will be enhanced to include this new model for reducing the occupant's anxiety. The team has the opportunity to test their prototype at the driving simulator.
Sponsored by: Honda R&D Americas, Inc.
Solar Power
NextEra Energy is the world's largest utility company, with a market capitalization of more than $62 billion. The project is to develop a computer-based (or phone-based) tool to allow for more accurate determination and forecasting of solar PV facility performance. The tool would use existing data from a solar facility, including generation and irradiance, and external data from a local radar or satellite. Through engineering principles and spatial atmospheric modeling, the tool would develop a refined estimate of expected actual performance and, through modeled temporal atmospheric variations, provide short-term forecasts of the solar facility's performance.
Sponsored by: Next Era
This project aimed at developing a swarm of Unmanned Ground Vehicles (UGVs) capable of performing a mission. The project involved several components, ranging from control design to embedded systems, software development, and image processing. In particular, the focus of the project was to:
- Design and build the robots. Students selected all the sensors, drivers, microcontrollers, and other components needed. They assembled the robot and programmed the microcontrollers to perform the desired tasks.
- Develop control algorithms for the robots. Students developed high-level and low-level control algorithms that allowed the robots to follow a desired reference path. Algorithms were also developed to estimate the position of the robot using encoder data.
- Develop a path planning strategy that, given a mission, would be able to determine the optimal reference path for the robot. The strategy also included collision avoidance algorithms.
- Develop an image processing algorithm that allowed the robots to recognize known targets. A picture of the known target was used as a reference and compared against the images collected by the robot's camera to identify the target.
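The encoder-based position estimation mentioned above is classic differential-drive odometry: each pair of incremental wheel displacements updates the pose (x, y, heading). A minimal sketch follows; the variable names and the midpoint-heading approximation are standard textbook choices, not taken from the project code.

```python
import math

def odometry_step(x, y, theta, d_left, d_right, wheelbase):
    """Update a differential-drive pose from incremental wheel travel.

    d_left/d_right are wheel displacements (e.g., encoder ticks times
    meters-per-tick); theta is the heading in radians. Uses the midpoint
    heading for the translation, a common small-step approximation."""
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / wheelbase
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

# Driving straight for 1 m advances x only:
print(odometry_step(0.0, 0.0, 0.0, 1.0, 1.0, 0.2))  # (1.0, 0.0, 0.0)
```

Integrating this update at each encoder interrupt gives the position estimate that the high-level controller compares against the reference path.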
The project was funded by the MIT Lincoln Laboratory.
MITLL Mentors: Tim Gallagher and David Browne
This project aimed at developing a robust communication platform for autonomous robots using USRPs and the 802.11p protocol.
Video Credit: Ryan Horns, Public Relations Coordinator,
Electrical and Computer Engineering, The Ohio State University
The protocol selected for the communication platform was 802.11p. The students implemented a transceiver using the GNU Radio software. Starting from open-source code for the PHY layer, the students reviewed the IEEE standards for the protocol and implemented the MAC layer. This made it possible to simulate a network with multiple agents trying to communicate and to tune some design parameters to optimize the communication between agents. Tests were also run using the developed software with USRPs (Ettus B210).
MITLL Mentors: Tim Gallagher and David Browne
The project focused on migrating an existing web app from an Amazon Web Services server to an osu.edu server. The web application utilized streaming video through the Wowza streaming engine and ran on Apache, PHP, and MySQL. The website was based on CodeIgniter, a light MVC framework. Students used PHP and MySQL.
The project was funded through one of Prof. Chao's projects.
Project done in collaboration with:
Theodore Chao, Ph.D.
Assistant Professor, Mathematics Education
College of Education and Human Ecology
Department of Teaching and Learning
Students worked with graduate students in the College of Education to develop STEM Education Mobile apps. STEM Education graduate students were working in teams researching and designing mobile apps for Math, Science, Engineering, or Computer Science Education. ECE students worked with these STEM Education graduate students to help them conceptualize their mobile app, to develop a prototype of this mobile app, and to assist in the research of learning that happens when this app is used with real children. The mobile apps were based upon existing research on how children learn Math, Science, Engineering, and Computer Science. The mobile app prototypes were developed and tested during the months of October and November, 2017. These apps were presented at a showcase event at the end of the Fall 2017 semester.
Project done in collaboration with:
Theodore Chao, Ph.D.