Tripods/StemForAll 2022 Projects


Derivatives and Neural Networks

Project supervisors: Alex Iosevich (UR), Azita Mayeli (CUNY) and Brian McDonald (UR)

Project description: We are going to study, both theoretically and empirically, the impact of adding discrete derivatives of the historical data as regressors in order to improve the performance of neural network prediction models. In simple terms, if we give a neural network a sequence of real numbers and ask it to predict the next few values, how helpful is it to provide the model with the consecutive differences (and/or second differences) of the elements of the sequence?
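
As a concrete illustration, here is a minimal sketch (assuming numpy; the window length and the toy quadratic series are placeholders) of augmenting each input window with its first and second differences before training a predictor:

    import numpy as np

    def difference_features(series, window):
        """Build (X, y) pairs for next-value prediction, augmenting each
        window of raw values with its first and second discrete differences."""
        X, y = [], []
        for i in range(len(series) - window):
            w = series[i : i + window]
            d1 = np.diff(w)        # first differences, length window - 1
            d2 = np.diff(w, n=2)   # second differences, length window - 2
            X.append(np.concatenate([w, d1, d2]))
            y.append(series[i + window])
        return np.array(X), np.array(y)

    # toy sequence: a noisy quadratic trend
    t = np.arange(200, dtype=float)
    series = 0.01 * t**2 + np.random.default_rng(0).normal(scale=0.5, size=t.size)
    X, y = difference_features(series, window=10)
    print(X.shape, y.shape)   # (190, 27) (190,)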

References: https://www.sciencedirect.com/science/article/abs/pii/S0893608005800206

Team: Amy Fang, Josh Iosevich, Anya Myakushina, Svetlana Pack, Maxwell Sun, and Stephanie Wang


Erdős problems and the Vapnik-Chervonenkis dimension

Project supervisors: Alex Iosevich (UR), Brian McDonald (UR) and Emmett Wyman (UR)

Project description: We are going to study the existence and complexity of finite point configurations in vector spaces over finite fields using the notion of VC-dimension, and investigate connections with related notions from learning theory.
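
For orientation, here is a minimal brute-force sketch, using only the Python standard library, of computing the VC-dimension of a finite set family; the family of "spheres" in F_5^2 used below is one illustrative class of the kind studied in the references:

    from itertools import combinations, product

    def shatters(family, pts):
        """True if every subset of pts arises as pts ∩ S for some S in family."""
        traces = {frozenset(p for p in pts if p in S) for S in family}
        return len(traces) == 2 ** len(pts)

    def vc_dim(family, universe):
        """Largest k such that some k-element subset of the universe is shattered."""
        best = 0
        for k in range(1, len(universe) + 1):
            if any(shatters(family, c) for c in combinations(universe, k)):
                best = k
            else:
                break   # monotone: if no k-set shatters, no (k+1)-set can
        return best

    q = 5
    plane = list(product(range(q), repeat=2))
    # "spheres" in F_q^2: S(c, r) = {x : (x1 - c1)^2 + (x2 - c2)^2 = r (mod q)}
    spheres = [frozenset(x for x in plane
                         if ((x[0] - c[0])**2 + (x[1] - c[1])**2) % q == r)
               for c in plane for r in range(q)]
    print(vc_dim(spheres, plane))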

References: arXiv:2203.03046, arXiv:2108.13231

Team: James Hanby (RIT), Tran Duy Anh Le (UR), Maxwell Sun (MIT)


Natural language processing, reinforcement learning and web scraping

Project supervisors: Alex Iosevich (UR) and Scott Kirila (Parker Avery)

Project description: We are going to develop a mechanism to quickly identify which academic department a given university page belongs to, which news outlet a given front page story was published in, and similar web scraping ideas. In the process we are going to develop a productive interaction between reinforcement learning and support vector machine methods.
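
As a starting point for the classification side, here is a minimal sketch (assuming scikit-learn; the texts and labels are toy stand-ins for scraped page content) of a TF-IDF plus support vector machine pipeline:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    texts = [
        "courses in real analysis, topology and probability",
        "organic chemistry lab sections and spectroscopy seminar",
        "compilers, operating systems and machine learning courses",
        "measure theory reading group and PDE seminar",
    ]
    labels = ["math", "chemistry", "computer science", "math"]

    # bag-of-words TF-IDF features feeding a linear support vector classifier
    model = make_pipeline(TfidfVectorizer(), LinearSVC())
    model.fit(texts, labels)
    print(model.predict(["seminar on harmonic analysis and geometric measure theory"]))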

References: https://www.mdpi.com/2078-2489/12/1/38/htm, https://www.geeksforgeeks.org/top-7-applications-of-natural-language-processing/

Team: Moeed Baradan, Huanyu Chen, Peirong Hao, Yumeng He, Bowen Jin, Zhizhi Jing, Junfei Liu, Jiayue Meng, Yixu Qiu, Yukun Yang
 

Neural networks and sales models with economic indicators

Project supervisors: Alex Iosevich (UR) and Scott Kirila (Parker Avery)

Project description: Many sales models started returning less-than-stellar results during the Covid era, in part because their training data came from before the Covid period. In this project we are going to take several readily available data sets containing sales data and try to come up with the right mix of economic (and other) indicators that will make predictions as stable as possible across time, including the Covid period.
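
As a concrete illustration, here is a minimal sketch (assuming pandas and scikit-learn; the column names, the placeholder sales series, and the rough unemployment numbers are invented) of merging a sales series with one economic indicator and fitting a simple regularized model:

    import pandas as pd
    from sklearn.linear_model import Ridge

    months = pd.date_range("2019-01-01", periods=36, freq="MS")
    sales = pd.DataFrame({"month": months,
                          "units": [100 + 2 * i for i in range(36)]})  # placeholder
    indicator = pd.DataFrame({"month": months,
                              "unemployment": [4.0] * 14 + [14.7, 13.2, 11.0, 10.2]
                                              + [8.0] * 18})           # invented values

    df = sales.merge(indicator, on="month")
    df["lag_units"] = df["units"].shift(1)    # last month's sales as a regressor
    df = df.dropna()

    model = Ridge().fit(df[["lag_units", "unemployment"]], df["units"])
    print(model.coef_)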

References: arXiv:2105.01036

Team: Moeed Baradan, Veronica Chistaya (house price variant), Ji Fang, Peirong Hao, Bingyi Liu, Kuixian Wu


Neural networks, approximation and geometric measure theory

Project supervisors: Alex Iosevich (UR) and Emmett Wyman (UR)

Project description: Neural networks are "universal approximators" in the sense that any Lipschitz function can be approximated arbitrarily closely by a neural network. This is a fundamental result, but many real-life data sets are not realistically described by a Lipschitz function because the Lipschitz condition limits volatility. In this project we are going to explore universal approximation when Lipschitz functions are replaced by more complicated (and hopefully more realistic) classes of functions, such as functions whose graphs satisfy a suitable fractal dimension condition.
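
As a concrete illustration, here is a minimal sketch (assuming numpy and scikit-learn) of fitting a small network to a truncated Weierstrass-type series, a standard example of a highly volatile function whose graph has fractal dimension strictly greater than 1:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def weierstrass(x, a=0.5, b=7, terms=20):
        """Truncated Weierstrass series: nowhere differentiable in the limit."""
        return sum(a**n * np.cos(b**n * np.pi * x) for n in range(terms))

    x = np.linspace(0, 1, 2000).reshape(-1, 1)
    y = weierstrass(x.ravel())

    net = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=5000, random_state=0)
    net.fit(x, y)
    print("max abs error:", np.abs(net.predict(x) - y).max())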

References: https://machinelearningmastery.com/neural-networks-are-function-approximators/, http://neuralnetworksanddeeplearning.com/chap4.html

Team: Amy Fang, Zhizhi Jing, Peter MacNeil, Yixu Qiu, Rohan Soni, Jake Wellington, Yukun Yang



Optimal location for charging stations for electric cars

Project supervisors: Alex Iosevich (UR) and Steven Senger (Missouri State University)

Project description: We are going to build a model to determine the optimal location for charging stations for electric cars in Rochester and Ithaca.
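
As a first-pass heuristic, here is a minimal sketch (assuming scikit-learn; the demand points and weights are randomly generated stand-ins for real traffic or registration data, with coordinates roughly in the Rochester area) that places k stations at weighted k-means centers:

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    # (lat, lon) demand points, roughly covering Rochester, NY
    demand = rng.uniform(low=[43.10, -77.70], high=[43.20, -77.55], size=(500, 2))
    weights = rng.integers(1, 10, size=500)   # e.g., trips originating at each point

    km = KMeans(n_clusters=8, n_init=10, random_state=0)
    km.fit(demand, sample_weight=weights)
    print(km.cluster_centers_)                # candidate station coordinates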

References: https://www.sciencedirect.com/science/article/pii/S2352484722001809

Team: Rachel Dennis, Konstantin Dits, Caroline He, Anya Myakushina


Modeling seizures using machine learning

Project supervisor: Alex Iosevich

Project description: It is widely believed in the medical community that epileptic seizures do not follow particular daily or weekly time patterns. We believe that the techniques of modern data science have not yet been deployed on this problem in a systematic way. We are going to experiment with a variety of techniques, including reinforcement learning, to look for patterns in commercially available data sets.
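
As a concrete illustration of one simple test, here is a minimal sketch (assuming scipy; the event times are randomly generated stand-ins for a real data set) of a chi-square test for a daily pattern in seizure timestamps:

    import numpy as np
    from scipy.stats import chisquare

    rng = np.random.default_rng(2)
    hours = rng.integers(0, 24, size=400)     # hour of day of each recorded event

    observed = np.bincount(hours, minlength=24)
    stat, p = chisquare(observed)             # null hypothesis: uniform over 24 hours
    print(f"chi2 = {stat:.1f}, p = {p:.3f}")  # a small p would suggest a daily pattern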

References: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2739976/, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5801770/

Team: James Hanby, Marco Minchev, Svetlana Pack


Multi-task learning

Project supervisors: Alex Iosevich (UR) and Nate Whybra

Project description: One of the most notable differences between most machine learning algorithms and humans is that humans can perform many different tasks. To develop AI that more closely mimics human inference and learning capabilities, a system must be able to generalize information from its environment and use that information flexibly to perform an arbitrary number of tasks. We are going to study the idea of building a large neural network, training it on multiple tasks, and then identifying substructures of this large network that achieve the same efficacy as the large network itself, or as neural networks built for each task individually. We will identify task-specific substructures of the large network using binary masks over the connections between nodes, as well as pruning techniques.
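
As a concrete illustration of the masking idea, here is a minimal sketch (assuming PyTorch; the layer sizes and keep probability are arbitrary) of a linear layer whose weights are gated by a frozen binary mask, in the spirit of the supermask papers cited below:

    import torch
    import torch.nn as nn

    class MaskedLinear(nn.Linear):
        """Linear layer whose weights are multiplied by a frozen binary mask."""
        def __init__(self, in_f, out_f, keep=0.5):
            super().__init__(in_f, out_f)
            mask = (torch.rand(out_f, in_f) < keep).float()
            self.register_buffer("mask", mask)   # not trained; saved with the model

        def forward(self, x):
            return nn.functional.linear(x, self.weight * self.mask, self.bias)

    # one mask per task could carve a task-specific subnetwork out of a shared trunk
    net = nn.Sequential(MaskedLinear(16, 32, keep=0.3), nn.ReLU(), MaskedLinear(32, 4))
    print(net(torch.randn(2, 16)).shape)         # torch.Size([2, 4])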

References:
Michael Crawshaw, "Multi-Task Learning with Deep Neural Networks: A Survey" (2020), arXiv:2009.09796
Shagun Sodhani et al., "Environments and Baseline for Multitask Reinforcement Learning" (2021), https://ep2021.europython.eu/media/conference/slides/5sUtdJv-multitask-reinforcement-learning-with-python.pdf
Jonathan Frankle and Michael Carbin, "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks" (2018), arXiv:1803.03635
Hattie Zhou et al., "Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask" (2019), arXiv:1905.01067
Namhoon Lee, Thalaiyasingam Ajanthan, and Philip H. S. Torr, "SNIP: Single-shot Network Pruning based on Connection Sensitivity" (2018), arXiv:1810.02340
"Network Pruning 101", https://towardsdatascience.com/neural-network-pruning-101-af816aaea61

Team: Ryan Hilton, Bowen Jin, Vicky Wang, Kevin Xu, Zhiyao Xu