A preliminary study of the effects of a novice hacker's learning process on computer hardware and base operating system component performance
Keywords: Research Subject Categories::TECHNOLOGY::Information technology::Computer science::Computer science
Computer crimes -- Prevention
Computer crimes -- Research
Computer networks -- Security measures
Malware (Computer software) -- Research
Hackers -- Research
Abstract: One of the major problems in computer security today is the mitigation of damage caused by malware. Common approaches to gathering information about this threat have been to investigate and exploit the structure of a malware attack for the prevention and reduction of damage, or to analyze the effect of malware originally found in the wild on target computer systems. This thesis provides a means of determining whether sufficient information exists to find or identify an inexperienced hacker inside a computer system. Pseudo-ransomware was analyzed inside a virtual machine, with investigation into the performance of the system's hardware and base operating system components. It was discovered that CPU load was the core indicator of possible ransomware, as it consistently displayed longer process completion times and signs of strain under intensified usage. Furthermore, this factor could be paired with statistics from other areas of the system to provide more detail about the attack itself.
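The abstract's core indicator can be illustrated with a minimal sketch: timing a fixed CPU-bound task and comparing its completion time against a clean-system baseline. This is a hypothetical example assembled for illustration, not code from the thesis; the workload, function names, and threshold are all assumptions.

```python
# Hypothetical sketch (not from the thesis): detect "longer process
# completion times" by timing a fixed benchmark task and comparing
# against a baseline measured on a known-clean system.
import time


def benchmark_task(n=100_000):
    """A fixed CPU-bound workload whose completion time can be compared."""
    total = 0
    for i in range(n):
        total += i * i
    return total


def completion_time(task=benchmark_task):
    """Wall-clock seconds to complete the benchmark."""
    start = time.perf_counter()
    task()
    return time.perf_counter() - start


def slowdown_ratio(baseline, observed):
    """How much longer the task took than the clean-system baseline;
    a sustained ratio well above 1.0 suggests the CPU strain the
    abstract associates with possible ransomware activity."""
    return observed / baseline
```

In practice the thesis pairs this kind of CPU-side evidence with statistics from other system components; a single slow run proves nothing on its own, so any real monitor would average many samples before flagging an attack.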
The following license files are associated with this item:
- Creative Commons
Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivs 3.0 United States
Showing items related by title, author, creator and subject.
Heterogeneous implementation of an artificial neural network
Carvino, Anthony; Coppola, Thomas (2018-05)
Reconfigurable logic devices, such as Field Programmable Gate Arrays (FPGA), offer ideal platforms for the dynamic implementation of embedded, low power, massively parallel neuromorphic computing systems. Though somewhat inferior to Application Specific Integrated Circuits (ASIC) with regard to performance and power consumption, FPGAs compensate for this small discrepancy by providing a versatile and reconfigurable fabric that is capable of implementing the logic of any valid digital system. Using the Xilinx ZYNQ 7 Series All Programmable System on Chip, as actuated and exposed by the PYNQ-Z1 Development Environment, the present work aims to provide a demonstration of the efficacy of the heterogeneous approach to neuromorphic computing. We expose a hardware implementation of a configurable neural layer to the processing system as a software module and handle its data and parameter flow at the productivity level using Python. Results indicate a nearly negligible increase (3%) in dynamic power consumption over that consumed by the processing system alone. Further, by specifically utilizing the embedded Digital Signal Processing (DSP) and memory blocks of the ZYNQ device, we employ a relatively large percentage of these resources (13% and 11%, respectively), but consume only 5% of the Lookup Table (LUT) fabric, preserving the vast majority of resources for the implementation of other, perhaps complementary systems.
Although the successfully completed heterogeneous system demonstrates that it possesses the capacity to learn, the proper training of neuromorphic systems such as this Artificial Neural Network (ANN) is a project in and of itself, and so the focus herein is more on the heterogeneous system engineered than on the prototypical application selected, which is text-independent speaker verification using Mel Frequency Cepstral Coefficients (MFCC) and log-filterbank energies as features. Fast, low power, small footprint neuromorphic systems are desirable for embedded applications that might improve the state of their art by exploiting applied artificial intelligence. Systems such as the configurable neural layer developed herein – which make use of the naturally versatile, low power, and high-performance FPGA in conjunction with a microprocessor control system – seem not only technologically viable, but well suited for handling intelligent embedded applications.
Raspberry Pi embedded operating system and runtime
Perry, James J. (2016-05)
This thesis explores the creation of a small-footprint, high-performance Embedded Operating System (EOS) for the Raspberry Pi (RPi). Using a customization approach, the image is configured to include only required functions and omits nonessential ones. The result preserves available memory and storage for use during runtime of an embedded solution. As part of this process, the thesis leverages the resulting runtime environment to provide complex functions (i.e., interprocess messaging and GPIO support) that run atomically (noninterruptibly).
A generative chatbot with natural language processing
Liebman, David (2020-12)
The goal of this thesis is to create a chatbot, a computer program that can respond verbally to a human in the course of simple day-to-day conversations. A deep learning neural network model called the Transformer is used to develop the chatbot. A full description of a Transformer is provided. The use of a few Transformer-based Natural Language Processing models to develop the chatbot, including Generative Pre-Training 2 (GPT2), is shown. For comparison, a Gated Recurrent Unit (GRU) based model is included. Each of these is explained below. The chatbot code is installed on a small device such as the Raspberry Pi with speech recognition and speech-to-text software. In this way a device that can carry out a verbal conversation with a human might be created. For the GRU-based model a Raspberry Pi 3B with 1GB of RAM can be used; a Raspberry Pi 4B with 4GB of RAM is needed to run a chatbot with GPT2.