A preliminary study of the effects of a novice hacker's learning process on a computer hardware and base operating system component performance
Keywords: Research Subject Categories::TECHNOLOGY::Information technology::Computer science::Computer science
Computer crimes -- Prevention
Computer crimes -- Research
Computer networks -- Security measures
Malware (Computer software) -- Research
Hackers -- Research
Abstract: One of the major problems in computer security today is mitigating the damage caused by malware. Common approaches to gathering information about this threat have been to investigate and exploit the structure of a malware attack for prevention and damage reduction, or to analyze the effect of malware found in the wild on target computer systems. This thesis provides a means of determining whether sufficient information exists to find or identify an inexperienced hacker inside a computer system. Pseudo-ransomware was analyzed inside a virtual machine, with an investigation into the performance of the system’s hardware and base operating system components. CPU load proved to be the central indicator of possible ransomware activity, as it consistently showed longer process completion times and signs of strain under intensified usage. This metric could be paired with statistics from other areas of the system to provide more detail about the attack itself.
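The abstract identifies CPU load as the key hardware indicator of pseudo-ransomware activity. A minimal sketch of that kind of measurement, assuming a Linux guest (this is illustrative tooling, not the instrumentation used in the thesis): sampling aggregate CPU utilization from /proc/stat over a short interval, the sort of baseline-versus-attack statistic the study compares.

```python
import time

def read_cpu_times():
    """Return (idle, total) jiffies from the aggregate 'cpu' line of /proc/stat."""
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    idle = fields[3] + fields[4]  # idle + iowait jiffies
    return idle, sum(fields)

def cpu_utilization(interval=0.5):
    """Percentage of non-idle CPU time over the sampling interval."""
    idle1, total1 = read_cpu_times()
    time.sleep(interval)
    idle2, total2 = read_cpu_times()
    dt = total2 - total1
    if dt == 0:
        return 0.0
    return 100.0 * (1.0 - (idle2 - idle1) / dt)

if __name__ == "__main__":
    # Sustained readings well above an established baseline would be one
    # of the "signs of strain" the abstract describes.
    print(f"CPU utilization: {cpu_utilization():.1f}%")
```

In practice such a sampler would log utilization alongside disk and memory statistics so that a spike in CPU load can be correlated with the other system areas the abstract mentions.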
The following license files are associated with this item:
- Creative Commons
Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivs 3.0 United States
Showing items related by title, author, creator and subject.
Heterogeneous implementation of an artificial neural network
Carvino, Anthony; Coppola, Thomas (2018-05)
Reconfigurable logic devices, such as Field Programmable Gate Arrays (FPGA), offer ideal platforms for the dynamic implementation of embedded, low power, massively parallel neuromorphic computing systems. Though somewhat inferior to Application Specific Integrated Circuits (ASIC) with regard to performance and power consumption, FPGAs compensate for this small discrepancy by providing a versatile and reconfigurable fabric that is capable of implementing the logic of any valid digital system. Using the Xilinx ZYNQ 7 Series All Programmable System on Chip, as actuated and exposed by the PYNQ-Z1 Development Environment, the present work aims to provide a demonstration of the efficacy of the heterogeneous approach to neuromorphic computing. We expose a hardware implementation of a configurable neural layer to the processing system as a software module and handle its data and parameter flow at the productivity level using Python. Results indicate a nearly negligible increase (3%) in dynamic power consumption over that consumed by the processing system alone. Further, by specifically utilizing the embedded Digital Signal Processing (DSP) and memory blocks of the ZYNQ device, we employ a relatively large percentage of these resources (13% and 11%, respectively), but consume only 5% of the Lookup Table (LUT) fabric, preserving the vast majority of resources for the implementation of other, perhaps complementary systems.
Although the successfully completed heterogeneous system demonstrates that it possesses the capacity to learn, the proper training of neuromorphic systems such as this Artificial Neural Network (ANN) is a project in and of itself, and so the focus herein is more on the heterogeneous system engineered than on the prototypical application selected, which is text-independent speaker verification using Mel Frequency Cepstral Coefficients (MFCC) and log-filterbank energies as features. Fast, low power, small footprint neuromorphic systems are desirable for embedded applications that might improve the state of their art by exploiting applied artificial intelligence. Systems such as the configurable neural layer developed herein – which make use of the naturally versatile, low power, and high-performance FPGA in conjunction with a microprocessor control system – seem not only technologically viable, but well suited for handling intelligent embedded applications.
MapReduce based convolutional neural networks
Leung, Jackie (2018-08)
Convolutional neural networks (CNNs) have gained global recognition in advancing the field of artificial intelligence and have had great successes in a wide array of applications including computer vision, speech and natural language processing. However, due to the rise of big data and the increased complexity of tasks, the efficiency of training CNNs has been severely impacted. To achieve state-of-the-art results, CNNs require tens to hundreds of millions of parameters that need to be fine-tuned, resulting in extensive training time and high computational cost. To overcome these obstacles, this thesis takes advantage of distributed frameworks and cloud computing to develop a parallel CNN algorithm. Close examination of the implementation of MapReduce based CNNs, as well as how the proposed algorithm accelerates learning, is discussed and demonstrated through experiments. Results reveal high accuracy in classification and improvements in speedup, scaleup and sizeup compared to the standard algorithm.
A generative chatbot with natural language processing
Liebman, David (2020-12)
The goal in this thesis is to create a chatbot, a computer program that can respond verbally to a human in the course of simple day-to-day conversations. A deep learning neural network model called the Transformer is used to develop the chatbot. A full description of a Transformer is provided. The use of a few different Transformer-based Natural Language Processing models to develop the chatbot, including Generative Pre-Training 2 (GPT2), is shown. For comparison a Gated Recurrent Unit (GRU) based model is included. Each of these is explained below. The chatbot code is installed on a small device such as the Raspberry Pi with speech recognition and speech-to-text software. In this way a device that can carry out a verbal conversation with a human might be created. For the GRU-based model a Raspberry Pi 3B with 1GB RAM can be used. A Raspberry Pi 4B with 4GB of RAM is needed to run a chatbot with GPT2.