• Detection of Brain Tumor in Magnetic Resonance Imaging (MRI) Images using Fuzzy C-Means and Thresholding

      Kalakuntla, Shashank; Andriamanalimanana, Bruno R., First Reader; Novillo, Jorge E., Second Reader; Spetka, Scott, Third Reader (SUNY Polytechnic Institute, 2020-08)
      Although clinical experts and radiologists are well trained to identify tumors and other abnormalities in the brain, identifying, detecting, and segmenting the affected region remains a tedious and time-consuming task. MRI is a conventional imaging technique for visualizing structures of the human body, yet abnormal brain structures are very difficult to see with simple imaging techniques. MRI offers many imaging modalities that scan and capture the internal structure of the human brain. Even with these techniques, reliably detecting brain tumors in such images remains difficult and tedious for the human eye. Emerging technology can ease the detection process. This project focuses on identifying brain tumors in MR images: noise is first removed with an adaptive median filter (AMF), the images are then enhanced with the Balance Contrast Enhancement Technique (BCET), image segmentation is performed using fuzzy c-means, and finally the segmented images are fed to a Canny edge detector to produce the tumor image. This report presents the approach, design, and implementation of the application, and finally the results. The application is implemented in Python, and a Jupyter notebook provides a block simulation of the entire project flow.
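      The fuzzy c-means segmentation step described above can be sketched as a minimal NumPy routine (an illustrative toy, not the project's actual code; the cluster count, fuzzifier m, iteration count, and toy intensity values are all assumptions):

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=50, seed=0):
    """Cluster 1-D intensity values with fuzzy c-means."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)                          # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m                             # fuzzified memberships
        centers = (um @ x) / um.sum(axis=1)     # membership-weighted means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = d ** (-2.0 / (m - 1.0))             # closer center -> higher membership
        u /= u.sum(axis=0)
    return centers, u

# toy "image": dark background intensities vs. bright tumor-like intensities
img = np.array([10, 12, 11, 13, 200, 205, 198, 202], dtype=float)
centers, u = fuzzy_c_means(img)
labels = u.argmax(axis=0)   # thresholding the fuzzy memberships into a hard mask
```

      On a real MR slice the same routine would run over the flattened pixel intensities, with the resulting hard mask passed on to edge detection.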
    • An Empirical Wi-Fi Intrusion Detection System

      Basnet, Diwash Bikram; Kholidy, Hisham A., Advisor (SUNY Polytechnic Institute, 2020-05)
      Today, wireless network devices are growing rapidly in number, and securing them is of the utmost importance. Attackers use new methods and techniques to trick systems and steal the most important data. Intrusion detection systems detect attacks by inspecting network traffic or logs. This work demonstrates the effectiveness of detecting attacks with machine learning techniques on the AWID dataset, which is produced from real wireless network logging. The authors of the AWID dataset used several supervised learning models to successfully detect intrusions. In this paper, we propose a newer intrusion detection model based on dense neural networks (DNN) and long short-term memory (LSTM) networks and evaluate it against the AWID-CLS-R subset. To get the best results from the model, we applied feature selection: we replaced unknown data with the value “none”, removed all repeated values, and kept only the important features. We preprocessed and feature-scaled both the training and testing datasets; we also reshaped the 2-dimensional input into a 3-dimensional array, because an LSTM takes 3-dimensional input, and later used Flatten layers to convert back to a 2-dimensional array for the output. The DNN and LSTM networks are comprehensively evaluated for classifying and predicting attacks, and we compute precision, recall, and F1 scores. We perform binary and multiclass classification on the dataset using these neural networks and achieve accuracies ranging from 86.70% to 96.01%.
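      The scaling-and-reshaping step the abstract describes can be sketched with NumPy alone (toy data; the frame count, feature count, and single-timestep layout are illustrative assumptions, not the thesis's actual preprocessing code):

```python
import numpy as np

# toy feature matrix: 6 frames x 8 selected features (values are arbitrary)
X = np.arange(48, dtype=float).reshape(6, 8)

# min-max feature scaling, column by column
X_scaled = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# an LSTM layer expects (samples, timesteps, features); here one timestep per frame
X_lstm = X_scaled.reshape(6, 1, 8)

# a Flatten layer undoes this before the final dense output layers
X_flat = X_lstm.reshape(6, -1)
```

      The round trip is lossless: flattening the 3-dimensional array recovers exactly the scaled 2-dimensional matrix.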
    • Generic Datasets, Beamforming Vectors Prediction of 5G Cellular Networks

      Singh, Manjit; Kholidy, Hisham A., Advisor (SUNY Polytechnic Institute, 2020)
      The early stages of 5G evolution revolve around delivering higher data speeds, latency improvements, and a functional redesign of mobile networks to enable greater agility, efficiency, and openness. The millimeter-wave (mmWave) massive multiple-input multiple-output (massive MIMO) system is one of the dominant technologies that consistently features in lists of 5G enablers and opens up new frontiers of services and applications for next-generation 5G cellular networks. mmWave massive MIMO shows potential to significantly raise user throughput, enhance spectral and energy efficiency, and increase the capacity of mobile networks by jointly exploiting the huge bandwidth available in the mmWave frequency bands and the high multiplexing gains achievable with massive antenna arrays. In this report, we present preliminary outcomes of research on mmWave massive MIMO (research on this subject is still in the exploratory phase) and study two papers related to mmWave and massive MIMO for next-generation 5G wireless systems. We focus on how a generic dataset incorporates accurate real-world measurements via ray-tracing data, and how machine learning/deep learning can find correlations in this data to predict better beamforming vectors. We also study a deep learning model trained using TensorFlow and Google Colaboratory.
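      The beam-prediction target such datasets provide can be sketched as codebook-based beam selection (a hypothetical single-path channel and DFT codebook; the antenna count, codebook size, and channel are invented for illustration, not the report's data):

```python
import numpy as np

n_ant, n_beams = 8, 16

# DFT codebook: each column is a candidate beamforming vector
n = np.arange(n_ant)
F = np.exp(-2j * np.pi * np.outer(n, np.arange(n_beams)) / n_beams) / np.sqrt(n_ant)

# hypothetical single-path mmWave channel aligned with codeword 5
h = np.exp(-2j * np.pi * n * 5 / n_beams)

gains = np.abs(F.conj().T @ h) ** 2   # beamforming gain of every codeword
best_beam = int(gains.argmax())       # the label a learned predictor would output
```

      A deep model trained on ray-tracing data learns to map channel or position features to this argmax label, replacing an exhaustive sweep over the codebook.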
    • Non-Convex Optimization: RMSProp Based Optimization for Long Short-Term Memory Network

      Yan, Jianzhi; Andriamanalimanana, Bruno, First Reader; Chiang, Chen-Fu, Second Reader; Novillo, Jorge, Third Reader (SUNY Polytechnic Institute, 2020-05-09)
      This project gives a comprehensive picture of non-convex optimization for deep learning and explains in detail Long Short-Term Memory (LSTM) networks and RMSProp. We start by illustrating the internal mechanisms of LSTM, such as the network structure and backpropagation through time (BPTT). We then introduce RMSProp optimization, together with relevant mathematical theorems and proofs that give a clear picture of how the RMSProp algorithm helps escape saddle points. Finally, we train an LSTM with RMSProp in our experiment; the results demonstrate the method's efficiency and accuracy, and in particular how it beats a traditional strategy in non-convex optimization.
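      The RMSProp update the project analyzes can be sketched on a toy non-convex objective (the learning rate, decay factor, and test function are illustrative assumptions, not the project's experimental setup):

```python
import numpy as np

def rmsprop_step(theta, grad, s, lr=0.01, rho=0.9, eps=1e-8):
    """One RMSProp update: divide the step by a running RMS of past gradients."""
    s = rho * s + (1 - rho) * grad ** 2
    theta = theta - lr * grad / (np.sqrt(s) + eps)
    return theta, s

# toy non-convex objective: minima at x = -1 and x = +1,
# with a flat critical point (vanishing gradient) at x = 0
f = lambda x: x ** 4 - 2 * x ** 2
df = lambda x: 4 * x ** 3 - 4 * x

x, s = 1.8, 0.0
for _ in range(500):
    x, s = rmsprop_step(x, df(x), s)
```

      Because the step is normalized by the gradient's running RMS, progress does not stall where gradients become small, which is the intuition behind RMSProp's behavior near flat critical points.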
    • A Wireless Intrusion Detection for the Next Generation (5G) Networks

      Ferrucci, Richard; Kholidy, Hisham A., Advisor (SUNY Polytechnic Institute, 2020-05)
      5G data systems are close to delivery to the public. The question remains how security will affect the release of this cutting-edge architecture. 5G data systems will carry massive amounts of personal data, since nearly everyone in the world uses a mobile phone these days. With everyone using a 5G device, the architecture presents a huge attack surface for adversaries to compromise. Building on machine learning techniques previously applied to 802.11 networks, we show that by improving upon these prior works we can get a better handle on security for the 5G architecture. We find that a machine learning classifier known as LogitBoost, combined with a selected combination of feature-selection methods, provides optimal results in identifying three different classes of traffic, referred to as normal, flooding, and injection traffic. We drastically decrease the time taken to perform this classification while improving the results. We simulate the device-to-device (D2D) connections involved in 5G systems using the AWID dataset. The evaluation and validation of the classification approach are discussed in detail in this thesis.
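      The per-class evaluation used for the three traffic classes can be sketched as follows (the label vectors are invented toy data, not AWID results; only the precision/recall/F1 arithmetic is the point):

```python
import numpy as np

classes = ["normal", "flooding", "injection"]

# hypothetical labels for 10 traffic frames (indices into `classes`)
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 2, 2, 2])
y_pred = np.array([0, 0, 0, 1, 1, 1, 0, 2, 2, 2])

scores = {}
for k, name in enumerate(classes):
    tp = np.sum((y_pred == k) & (y_true == k))      # true positives for class k
    precision = tp / np.sum(y_pred == k)            # of predicted k, how many right
    recall = tp / np.sum(y_true == k)               # of actual k, how many found
    f1 = 2 * precision * recall / (precision + recall)
    scores[name] = (precision, recall, f1)
```

      Reporting these per class, rather than a single accuracy figure, exposes whether the rarer flooding and injection classes are being detected as reliably as normal traffic.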