LATEST PROJECTS
Project | 01
Project | Positive Unlabeled Learning in Object Detection
Classical supervised learning trains on fully labeled examples and makes predictions on unseen test examples, and most object detection methods follow this paradigm. It requires a large amount of fully labeled data to learn a function that approximates the data distribution. However, even in public datasets that are nominally fully labeled (PASCAL VOC, Microsoft COCO, ImageNet), the annotations are inconsistent and incomplete. Adapting Positive-Unlabeled Learning (PU Learning), which treats missing annotations as unlabeled rather than negative, may help address this problem.
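As a rough sketch of how PU Learning could be applied, the snippet below uses the classic Elkan and Noto approach on synthetic 2-D points: a classifier is trained to separate labeled positives from the unlabeled pool, and its scores are rescaled by the estimated labeling frequency. The synthetic data, the logistic regression choice, and treating detection proposals as plain feature vectors are illustrative assumptions, not the project's actual pipeline.

```python
# Minimal PU learning sketch (Elkan & Noto style) on synthetic data.
# Assumption: examples are plain feature vectors; in object detection they
# would be region/proposal features instead.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic 2-D data: positives around (2, 2), negatives around (-2, -2).
pos = rng.normal(loc=2.0, scale=1.0, size=(500, 2))
neg = rng.normal(loc=-2.0, scale=1.0, size=(500, 2))

# Only a fraction of positives are labeled; the rest are mixed into "unlabeled".
labeled_pos = pos[:150]
unlabeled = np.vstack([pos[150:], neg])

X = np.vstack([labeled_pos, unlabeled])
s = np.concatenate([np.ones(len(labeled_pos)), np.zeros(len(unlabeled))])  # s = "is labeled"

# Step 1: non-traditional classifier g(x) ~ P(s = 1 | x).
g = LogisticRegression().fit(X, s)

# Step 2: estimate c = P(s = 1 | y = 1) as the mean score on the labeled
# positives (ideally a held-out set).
c = g.predict_proba(labeled_pos)[:, 1].mean()

# Step 3: recover P(y = 1 | x) = P(s = 1 | x) / c, clipped to [0, 1].
p_y_given_x = np.clip(g.predict_proba(unlabeled)[:, 1] / c, 0.0, 1.0)
print("estimated positive fraction among unlabeled:", (p_y_given_x > 0.5).mean())
```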
Project | 02
Project | Interpretability and Transfer Learning of Deep Neural Network
A project studying the interpretability of deep learning models: why they can perform at such high levels, and why they make particular predictions. Rather than treating the network as a black box, we looked for clues in the training data itself. The idea was to synthesize data for which we have complete information, train on it, and combine the results with additional information extracted from inside the network. Our aim was to understand which parts of the dataset drive the resulting predictions.
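A minimal sketch of that idea, under the assumption of a toy fully-synthetic dataset in which only two input features carry the signal: train a small network, then use input-gradient saliency to check whether the network's internals point back to the planted features. The architecture, feature indices, and saliency method are illustrative choices, not the project's actual analysis.

```python
# Sketch: synthesize data with fully known structure, train, then look inside
# the network with input-gradient saliency to see which features it relies on.
import torch
import torch.nn as nn

torch.manual_seed(0)

n, d = 2000, 10
X = torch.randn(n, d)
# Ground truth: only features 0 and 3 determine the label; the rest are noise.
y = ((X[:, 0] + X[:, 3]) > 0).float().unsqueeze(1)

model = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(300):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# Saliency: average |d output / d input| over the dataset; the informative
# features (0 and 3) should stand out if the network actually uses them.
X_req = X.clone().requires_grad_(True)
model(X_req).sum().backward()
saliency = X_req.grad.abs().mean(dim=0)
print("per-feature saliency:", saliency)
```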
Project | 03
Project | Multiple Instance Learning for Training Weakly Supervised Learning Models
We started with a simple machine learning algorithm, a support vector machine, on synthesized data to verify the hypothesis that multiple instance learning can extract useful information even when only minimal label information is available. Through an iterative relabeling process, we arrived at labels for each object with very high confidence. We then upgraded the system to deep learning models, such as convolutional neural networks, to detect buildings in satellite imagery.
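A minimal sketch of that iterative loop in the style of mi-SVM, run on synthetic bags rather than the satellite data: instances in negative bags stay negative, instances in positive bags start with the bag label and are re-labeled by the current SVM, under the constraint that every positive bag keeps at least one positive instance. The bag construction and the LinearSVC choice are assumptions for illustration.

```python
# Iterative instance relabeling under the multiple-instance constraint (mi-SVM style).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def make_bag(positive, n_inst=10):
    X = rng.normal(0.0, 1.0, size=(n_inst, 2))
    if positive:
        X[rng.integers(n_inst)] += np.array([4.0, 4.0])  # plant one true positive instance
    return X

bags = [make_bag(i % 2 == 0) for i in range(60)]
bag_labels = np.array([1 if i % 2 == 0 else 0 for i in range(60)])
sizes = [len(b) for b in bags]

X = np.vstack(bags)
bag_idx = np.repeat(np.arange(len(bags)), sizes)
neg_instance = np.repeat(bag_labels, sizes) == 0   # instances from negative bags
y = np.repeat(bag_labels, sizes).astype(float)     # initial instance labels = bag labels

for _ in range(10):
    clf = LinearSVC(C=1.0).fit(X, y)
    scores = clf.decision_function(X)
    new_y = (scores > 0).astype(float)
    new_y[neg_instance] = 0.0                       # negative-bag instances stay negative
    for b in np.where(bag_labels == 1)[0]:          # MIL constraint: each positive bag
        mask = bag_idx == b                         # keeps at least one positive instance
        if new_y[mask].sum() == 0:
            new_y[np.where(mask)[0][np.argmax(scores[mask])]] = 1.0
    if np.array_equal(new_y, y):                    # stop once the labeling is stable
        break
    y = new_y

print("positive instances recovered:", int(y.sum()))
```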
Project | 04
Project | Prediction of Seizure in EEG and ECG signals
The project applied machine learning to help Duke Hospital predict seizures in patients from EEG (brain) and ECG (heart) signals. To make the system immediately usable, we simplified the problem by training a random forest model on a list of hand-crafted features. In the end, we could predict a seizure some time in advance with around 88% accuracy.
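A minimal sketch of that setup on synthetic signals (the hospital data, the actual feature list, and the reported accuracy are not reproduced here): each signal window is reduced to a few hand-crafted features, and a random forest is trained to separate pre-seizure windows from normal ones. The window length, the features, and the synthetic "pre-seizure" pattern are assumptions for illustration.

```python
# Hand-crafted window features + random forest on synthetic 1-D signals.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def window_features(w):
    # Illustrative features: amplitude statistics, line length, low-band energy.
    return np.array([w.mean(), w.std(), np.abs(np.diff(w)).sum(),
                     (np.abs(np.fft.rfft(w))[1:10] ** 2).sum()])

def make_window(preictal, n=256):
    base = rng.normal(0.0, 1.0, n)
    if preictal:  # crude stand-in for pre-seizure rhythmic activity
        base += 0.8 * np.sin(np.linspace(0, 40 * np.pi, n))
    return base

X = np.array([window_features(make_window(p)) for p in ([True] * 300 + [False] * 300)])
y = np.array([1] * 300 + [0] * 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```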