M.Sc. Computer Science

Permanent URI for this collection: https://hdl.handle.net/10464/2881


Recent Submissions

  • Item (Open Access)
    Emerging Variants and Fading Immunity: Analyzing the Impact in Epidemic Modeling
    Yasmeen, Farhana; Department of Computer Science
    Outbreaks like the recent COVID-19 pandemic underscore the importance of quick and informed responses to control and mitigate the epidemic. This thesis aims to develop a model that can provide deeper insights into how epidemics progress and behave. Accurate epidemic simulation can provide valuable insights into how an epidemic affects the population. This thesis considers both epidemic spread and epidemic severity, and integrates fading immunity into the epidemic model, creating a more realistic representation of real-life scenarios. This research also extends the model by considering diverse population structures in the contact network where different age groups have different levels of immunity strength. An evolutionary algorithm was used to generate and evolve personal contact networks. We analyzed the epidemic dynamics within these networks, focusing on how different proportions of young and old individuals impact the spread and severity of the epidemic. Results reveal that older populations with weaker immunity experience more severe infections, while younger populations with stronger immunity mitigate both spread and severity. The thesis also explores the impact on variant generation, showing that when using the epidemic spread fitness function there is a tendency to produce more variants than when using the epidemic severity fitness function, highlighting the virus's need to mutate in response to existing immunity. When the population is dominated by younger individuals, even though fewer variants are being generated, successful variants tend to exhibit a higher mutation distance to overcome the robust immunity present in the community.
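    To make the modelling idea above concrete, here is a minimal, self-contained sketch of a discrete-time SIRS-style simulation on a weighted contact network with age-dependent waning immunity. The function name, parameter values, and the two-group immunity rule are illustrative assumptions, not the thesis's actual model or its evolutionary algorithm.

```python
import random

# States: S = susceptible, I = infected, R = recovered (temporarily immune)
def simulate_sirs(edges, ages, steps=100, beta=0.3, recover=7, seed=0):
    """Toy SIRS simulation on a weighted contact network.

    edges : list of (u, v, weight) contacts, weight in (0, 1]
    ages  : dict node -> "young" | "old"; older nodes lose immunity faster
    """
    rng = random.Random(seed)
    state = {n: "S" for n in ages}
    timer = {n: 0 for n in ages}
    patient_zero = rng.choice(list(ages))
    state[patient_zero], timer[patient_zero] = "I", recover
    history = []
    for _ in range(steps):
        new_state = dict(state)
        for u, v, w in edges:
            for a, b in ((u, v), (v, u)):
                if state[a] == "I" and state[b] == "S" and rng.random() < beta * w:
                    new_state[b], timer[b] = "I", recover
        for n in ages:
            if state[n] == "I":
                timer[n] -= 1
                if timer[n] <= 0:
                    new_state[n] = "R"
                    # immunity lasts a shorter time for the "old" (weaker) group
                    timer[n] = 5 if ages[n] == "old" else 20
            elif state[n] == "R":
                timer[n] -= 1
                if timer[n] <= 0:
                    new_state[n] = "S"   # fading immunity: back to susceptible
        state = new_state
        history.append(sum(1 for s in state.values() if s == "I"))
    return history

edges = [(0, 1, 0.9), (1, 2, 0.4), (2, 3, 0.7), (3, 0, 0.2), (1, 3, 0.5)]
ages = {0: "young", 1: "old", 2: "young", 3: "old"}
print(simulate_sirs(edges, ages, steps=30))
```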
  • Item (Embargo)
    GNN-based Handover Management in 5G Vehicular Networks
    Mehregan, Nazanin; Department of Computer Science
    The rapid advancement of 5G technology has transformed vehicular networks, delivering high bandwidth, low latency, and faster data rates for real-time applications in smart vehicles and smart cities. This enhances traffic safety and the quality of entertainment services. However, challenges remain, such as 5G's limited coverage range, which requires the installation of additional base stations, and frequent handovers, known as the "ping-pong effect," which can cause network instability, especially in high-mobility environments. Traditional reactive methods struggle to manage these issues effectively. In this study, we propose TH-GCN (Throughput-oriented Graph Convolutional Network), which optimizes handover management in dense 5G environments using graph neural networks (GNNs). TH-GCN predicts optimal connections and the best handover choices by modeling vehicles and towers as nodes in a dynamic graph, with connections depicted as edges and incorporating features like signal quality, vehicle mobility, throughput, and tower load. By shifting from a purely user-centric to a combined user equipment and base station-centric approach, our method provides a comprehensive view of the network and enhances adaptability in real-time handover decisions. Batch tests conducted in the Simu5G simulator showed significant improvements, including up to a 78% reduction in handovers and a 10% improvement in signal quality compared to state-of-the-art methods. TH-GCN effectively reduced handovers while maintaining optimal levels of throughput and latency, particularly in high-density, high-mobility scenarios.
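    TH-GCN itself is not reproduced here; the following is a minimal sketch of a single graph-convolution layer over a toy vehicle/tower graph with node features, using NumPy. The adjacency matrix, feature dimensions, and mean-aggregation normalization are illustrative assumptions rather than the thesis's architecture.

```python
import numpy as np

# Toy graph: nodes 0-1 are towers, nodes 2-4 are vehicles; an edge marks a
# candidate connection. Node features could be (signal quality, load, speed).
A = np.array([            # symmetric adjacency matrix, illustrative only
    [0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1],
    [1, 0, 0, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 1, 0, 0, 0],
], dtype=float)
H = np.random.default_rng(0).random((5, 3))   # node feature matrix
W = np.random.default_rng(1).random((3, 4))   # layer weights (fixed here)

def gcn_layer(A, H, W):
    """One graph-convolution layer: average neighbour (and self) features,
    apply a linear map, then a ReLU nonlinearity."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)
    return np.maximum(0.0, (A_hat / deg) @ H @ W)

print(gcn_layer(A, H, W).shape)   # (5, 4): new embedding per node
```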
  • Item (Open Access)
    Federated Learning on Knowledge Graphs via Contrastive Alignment
    Mahmud, Antor; Department of Computer Science
    In traditional federated learning (FL) frameworks for knowledge graph embeddings (KGE), individual clients train their local KGE models independently, and a central server collects and aggregates (e.g., by averaging) these models to produce a global one. This process ensures data privacy throughout the FL training process, as the server does not require direct access to clients’ data. However, the performance of traditional FL global aggregation algorithms is significantly challenged by the non-identical distribution of data across clients’ knowledge graphs. To tackle this issue, we introduce AlignNet, a novel supervised contrastive learning (CL) approach that helps align both entity and relation embeddings across clients in federated settings. AlignNet works by pulling similar embeddings closer together while pushing dissimilar ones further apart, using only the existence of entities and relations without accessing the underlying data or detailed associations. This alignment process ensures robustness and better generalization across diverse clients, while still maintaining privacy. Our experiments on benchmark datasets show that AlignNet consistently outperforms current FL methods, especially with more complex models and datasets. We found that AlignNet effectively reduces the variability and noise introduced by the FL process. While traditional FL setups tend to lose performance as more clients join the aggregation process, AlignNet improves as the number of clients increases. This makes AlignNet a strong choice for large-scale federated settings with many clients and diverse data. Overall, our results show that AlignNet is a scalable and reliable solution for federated KGE, making it an excellent fit for real-world applications like healthcare, finance, and distributed IoT networks, where handling data diversity and maintaining performance at scale are crucial.
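    A minimal sketch of the supervised contrastive idea described above (embeddings sharing an entity or relation ID are pulled together, all others pushed apart), written in PyTorch. The function name, temperature, and toy data are illustrative assumptions; this is not the authors' AlignNet implementation.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Toy supervised contrastive loss.

    embeddings : (N, d) tensor, e.g. entity embeddings collected from clients
    labels     : (N,) tensor of shared entity IDs; same ID = positive pair
    """
    z = F.normalize(embeddings, dim=1)            # cosine-similarity space
    sim = z @ z.t() / temperature                 # (N, N) similarity matrix
    n = z.size(0)
    mask_self = torch.eye(n, dtype=torch.bool)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~mask_self
    sim = sim.masked_fill(mask_self, float("-inf"))
    # log-softmax over all other samples for each anchor
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_log_prob = torch.where(pos, log_prob, torch.zeros_like(log_prob))
    pos_counts = pos.sum(dim=1)
    valid = pos_counts > 0                        # anchors that have positives
    loss = -pos_log_prob.sum(dim=1)[valid] / pos_counts[valid]
    return loss.mean()

emb = torch.randn(6, 32)
ids = torch.tensor([0, 0, 1, 1, 2, 2])   # same entity seen by two clients
print(supervised_contrastive_loss(emb, ids).item())
```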
  • Item (Open Access)
    Application of L-Fuzzy Relation to Social Choice Theory
    Osei, Clement Frimpong; Department of Computer Science
    In situations like voting, decisions are made based on individual preferences. The majority rule might not always be the best choice for aggregating individual preferences, and individual preferences may be uncertain or partial. For example, someone may prefer a candidate over another, not absolutely, but only up to a certain degree. The objective of this research is to investigate this kind of L-Fuzzy Social Choice Theory, focusing on mathematically modeling personal preferences with uncertainty. It includes implementing visualizations and computations of the three common approaches for modeling individual preferences. The goal is to provide the most general framework in which these three common models of L-fuzzy preference are equivalent.
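    A small illustrative sketch of a graded (fuzzy) preference relation over three candidates, taking the lattice L to be the unit interval [0, 1], together with max-min composition and a transitivity check. The numbers and the choice of lattice are assumptions for illustration only, not the framework developed in the thesis.

```python
import numpy as np

# Fuzzy preference relation over candidates {a, b, c}: R[i, j] is the degree
# (in the lattice L = [0, 1]) to which candidate i is preferred to candidate j.
R = np.array([
    [0.0, 0.8, 0.6],
    [0.3, 0.0, 0.9],
    [0.4, 0.2, 0.0],
])

def max_min_composition(P, Q):
    """(P ; Q)[i, j] = max_k min(P[i, k], Q[k, j]) -- relational composition."""
    n = P.shape[0]
    return np.array([[np.max(np.minimum(P[i, :], Q[:, j])) for j in range(n)]
                     for i in range(n)])

def is_max_min_transitive(P):
    """P is max-min transitive iff P;P <= P pointwise."""
    return bool(np.all(max_min_composition(P, P) <= P + 1e-12))

print(max_min_composition(R, R))
print("transitive:", is_max_min_transitive(R))
```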
  • Item (Open Access)
    Towards Trustworthy AI: Investigating Bias and Confidence Alignment in Large Language Models
    Kumar, Abhishek; Department of Computer Science
    As LLMs are increasingly integrated into critical fields such as healthcare, the judiciary, and education, thoroughly evaluating their trustworthiness is becoming ever more essential. This research thesis presents a unified examination of two critical aspects of trustworthiness in LLMs: their self-evaluation of confidence and the subtler biases that shape their outputs. In the first part, we introduce the concept of Confidence Probability Alignment to scrutinize how LLMs’ internal confidence, indicated by token probabilities, aligns with the confidence they express when queried about their certainty. This analysis is enriched by employing diverse datasets and prompting techniques aimed at encouraging model introspection, such as structured evaluation scales and the inclusion of answer options. Notably, OpenAI’s GPT-4 emerges as a leading example, demonstrating strong confidence-probability alignment, signifying a step towards understanding and improving LLM reliability. Conversely, the second part addresses the nuanced biases LLMs exhibit towards specific social narratives and identities, introducing the Representative Bias Score (RBS) and the Affinity Bias Score (ABS) to quantify these biases. Our exploration into representative and affinity biases through the Creativity-Oriented Generation Suite (CoGS) reveals a pronounced preference in LLMs for outputs reflecting the experiences of predominantly white, straight, and male identities. This trend not only mirrors but potentially exacerbates societal biases, highlighting a complex interaction between human and machine bias perceptions.
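    A toy sketch of the confidence-probability comparison described above: given, for each question, the probability the model assigned to its chosen answer and the confidence it verbalized, one simple alignment measure is a rank correlation. The numbers below are made up, and Spearman correlation is only one plausible measure, not necessarily the thesis's metric.

```python
from scipy.stats import spearmanr

# Toy records: for each question, the probability the model assigned to its
# chosen answer token(s) and the confidence it verbalized when asked about it.
token_probability = [0.91, 0.55, 0.78, 0.34, 0.88, 0.62]
stated_confidence = [0.95, 0.60, 0.80, 0.40, 0.85, 0.50]   # e.g. "90%" -> 0.90

rho, p_value = spearmanr(token_probability, stated_confidence)
print(f"confidence-probability alignment (Spearman rho): {rho:.3f} (p={p_value:.3f})")
```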
  • Item (Open Access)
    Memetic Algorithm for Large-Scale Real-World Vehicle Routing Problems with Simultaneous Pickup and Delivery with Time Windows
    Gibbons, Ethan; Department of Computer Science
    The vehicle routing problem is a combinatorial optimization problem which has many real-world applications from supply chain management to public transportation. Many variants of the vehicle routing problem (VRP) exist with different constraints to reflect a variety of transportation scenarios faced by different industries. In this thesis, we examine the VRP variant referred to as the vehicle routing problem with simultaneous pickup and delivery with time windows (VRPSPDTW). In particular, we tackle a set of 20 recently released large-scale VRPSPDTW problem instances that were derived from the data of actual customers served by the transportation company known as JD Logistics. A memetic algorithm (MA) using an altered version of the Best-Cost-Route-Crossover (BCRC) is proposed and applied to this problem set. The proposed MA is able to find new best known solutions and performs better on average for all 20 instances in comparison to previous efforts. Comparative experiments are performed with other state-of-the-art crossovers to validate the effectiveness of the altered BCRC when used in the proposed MA. In addition, the dynamic VRPSPDTW (DVRPSPDTW) is introduced by providing a mathematical formulation of the problem and transforming existing VRPSPDTW instances into dynamic instances. We perform a preliminary study on these dynamic instances using the proposed MA in conjunction with a simple but effective vehicle loading strategy, and the results are provided to promote further research into this dynamic variant.
  • Item (Open Access)
    A Comparative Study of Evolutionary Algorithms and Particle Swarm Optimization Approaches for Constrained Multi-Objective Optimization Problems
    McNulty, Alanna; Department of Computer Science
    Many real-world problems in the fields of science and engineering contain multiple conflicting objectives which need to be optimized simultaneously, as well as additional problem constraints. These problems are referred to as constrained multi-objective optimization problems (CMOPs). CMOPs do not have a single optimal solution, but instead a set of optimal trade-off solutions where an improvement in one objective worsens another. Recently, many constrained multi-objective evolutionary algorithms (CMOEAs) have been introduced for solving CMOPs. Each of these computational intelligence algorithms can be classified into one of four different approaches: classic CMOEAs, co-evolutionary approaches, multi-stage approaches, and multi-tasking approaches. An extensive survey and comparative study of the aforementioned algorithms on a variety of benchmark test problems, including real-world CMOPs, is carried out in order to determine the current state-of-the-art CMOEAs. Additionally, this work proposes a multi-guide particle swarm optimization (MGPSO) approach for constrained multi-objective optimization problems. MGPSO is a multi-swarm approach, which has previously been effectively applied to other challenging optimization problems in the literature. This work adapts MGPSO for solving CMOPs and compares its performance against the aforementioned existing computational intelligence techniques. The comparative study showed that the algorithmic performance was problem-dependent. Lastly, while the proposed MGPSO approach was likewise problem-dependent, it was found to perform best for some of the real-world problems, namely the process design and synthesis problems, and had competitive performance in the power system optimization problems.
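    For readers unfamiliar with how CMOEAs compare candidate solutions, here is a minimal sketch of the constrained-domination rule commonly used in this setting (feasible beats infeasible, smaller violation wins among infeasible solutions, Pareto dominance among feasible ones), assuming minimization. It is a generic building block, not the MGPSO algorithm itself.

```python
def dominates(f_a, f_b):
    """Pareto dominance for minimization: a is no worse everywhere, better somewhere."""
    return all(x <= y for x, y in zip(f_a, f_b)) and any(x < y for x, y in zip(f_a, f_b))

def constrained_dominates(f_a, cv_a, f_b, cv_b):
    """Constrained domination:
    f_*  : objective vectors (minimized); cv_* : total constraint violation (0 = feasible).
    """
    if cv_a == 0 and cv_b > 0:
        return True                     # feasible beats infeasible
    if cv_a > 0 and cv_b == 0:
        return False
    if cv_a > 0 and cv_b > 0:
        return cv_a < cv_b              # both infeasible: smaller violation wins
    return dominates(f_a, f_b)          # both feasible: ordinary Pareto dominance

print(constrained_dominates([1.0, 2.0], 0.0, [1.5, 2.5], 0.0))   # True
print(constrained_dominates([0.5, 0.5], 0.2, [9.0, 9.0], 0.0))   # False
```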
  • Item (Open Access)
    The Application of Chaos Game Representations and Deep Learning for Grapevine Genetic Testing
    Vu, Andrew; Department of Computer Science
    The identification of grapevine species, cultivars, and clones associated with desired traits such as disease resistance, crop yield, crop quality, etc., is an important component of viticulture. True-to-type identification has proven to be very critical and yet very challenging for grapevine due to the existence of a large number of cultivars and clones and the historical issues of synonyms and homonyms. DNA-based identification, superior to morphology-based methods in accuracy, has been used as the standard genetic testing method, but not without shortcomings. To overcome some of the limitations of traditional microsatellite-marker-based genetic testing, we explored a whole genome sequencing (WGS)-based approach by taking advantage of the latest next-generation sequencing (NGS) technologies to achieve the best accuracy and better availability at affordable cost. To address the challenges of the extremely high-dimensional nature of the WGS data, we examined the effectiveness of using Chaos Game Representation (CGR) for representing the genome sequence data and the use of deep learning in visual analysis for species and cultivar identification. We found that CGR images provide a meaningful way of capturing patterns and motifs for use with visual analysis, with the best prediction results demonstrating a 0.990 mean balanced accuracy in classifying a subset of five species. Our preliminary research highlights the potential for CGR and deep learning as a complementary tool for WGS-based species-level and cultivar-level classification.
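    A minimal sketch of a frequency Chaos Game Representation (FCGR) for a DNA sequence: each base moves a point halfway toward its assigned corner of the unit square, and the visited points are binned into a 2^k by 2^k grid that can be treated as an image. The corner convention, resolution, and toy sequence are illustrative assumptions, not the thesis's pipeline.

```python
import numpy as np

# Corner convention (one common choice): A, C, G, T at the unit-square corners.
CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def fcgr(sequence, k=6):
    """Frequency Chaos Game Representation: a 2^k x 2^k count matrix.

    Each base moves the current point halfway toward its corner; the visited
    points are binned into a grid, giving an image-like summary of k-mer usage.
    """
    size = 2 ** k
    grid = np.zeros((size, size), dtype=np.int64)
    x, y = 0.5, 0.5
    for base in sequence.upper():
        if base not in CORNERS:          # skip ambiguous bases such as 'N'
            continue
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0
        i = min(int(y * size), size - 1)
        j = min(int(x * size), size - 1)
        grid[i, j] += 1
    return grid

image = fcgr("ACGTACGTGGGCCCATTTACGATCGATCG" * 50, k=4)
print(image.shape, image.sum())
```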
  • Item (Open Access)
    Topic Modeling-based Logging Suggestions for Java Software Systems
    Akter, Mehenika; Department of Computer Science
    Log statements help software developers and end users learn valuable run-time information, while log levels categorize the severity of that information. Researchers have been working extensively on log-related problems for the last two decades, and a good amount of research has been conducted on logging and its practices. However, determining which topics in a system are likely to be logged remains an area with potential for further work. To conduct our study, we first examined code snippets from several well-known open-source Java projects. We collected the logged methods from nine applications and, after preprocessing the methods and extracting the required data, applied several well-known topic models: Latent Dirichlet Allocation (LDA), Latent Semantic Analysis (LSA), and Non-negative Matrix Factorization (NMF). In the first part of the results, we show how topics relate to logging in order to investigate the alignment between topic modeling and logging. Our dataset, enriched with meaningful words related to method functionality, is subjected to LDA analysis. Results indicate that topics with the highest sum of word probabilities are more likely to be logged. In the second part, we list the popular topics with their associated words from different systems as generated by LDA. In the last part, we evaluate the performance of the models using coherence scores. We believe that our research will not only be useful for its results and evaluation but will also help future researchers by providing a unique dataset.
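    A small sketch of the topic-modeling step using scikit-learn's LDA on a toy corpus standing in for preprocessed method text; the documents, number of topics, and other parameters are illustrative assumptions, not the thesis's dataset or settings.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy "documents": words extracted from logged Java methods (identifiers split,
# stop words removed) -- stand-ins for the preprocessed corpus described above.
methods = [
    "open connection database execute query close connection",
    "read file buffer parse line close file stream",
    "send request http response status retry timeout",
    "connect socket timeout retry send packet receive",
    "query table insert row commit transaction rollback",
    "write file flush buffer close stream exception",
]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(methods)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(counts)

terms = vectorizer.get_feature_names_out()
for t, weights in enumerate(lda.components_):
    top = weights.argsort()[::-1][:5]
    print(f"topic {t}:", ", ".join(terms[i] for i in top))
```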
  • Item (Open Access)
    LSTM-oriented Handover Decision-making with SVR-based Mobility Prediction in 5G Vehicular Networks
    Chowdhury, Shajib; Department of Computer Science
    The advancement of 5G technology is ushering in a transformative era for Vehicular Networks (VN), enabling seamless communication among vehicles and other entities. Connected vehicles hold significant potential for improving traffic safety and enhancing in-vehicle entertainment. With the increasing number of vehicular applications, the need for reliable, high-bandwidth, and low-latency connections has become paramount. Ensuring consistent connections in dynamic vehicular settings remains an ongoing challenge, especially given the necessity for smooth Handovers (HO) between transmission points as vehicles move rapidly. Frequent handovers, due to the limitations of communication range, can impact user experiences, especially in safety-critical situations. One potential solution involves transitioning to network virtualization to address the challenges posed by ultra-dense networks and the limited communication range in 5G. To tackle these challenges, we present an approach based on mobility prediction for selecting virtual cells using Support Vector Regression (SVR) and making Handover (HO) decisions using Long Short-Term Memory (LSTM). Our method, named M-LSVR, focuses on forming user-centric virtual cells based on network attributes and real-time data. The dynamic adjustment of virtual cell size using predictive mobility ensures stability and reduces unnecessary handovers. Integrating mobility prediction with HO decision-making aims to establish a more stable connection, enhancing the quality of virtual cells in high-mobility vehicular environments. This approach aims to optimize the user experience by minimizing unnecessary tower switches and creating efficient, high-quality virtual cells in the 5G vehicular network.
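    A minimal sketch of the mobility-prediction ingredient: fitting scikit-learn's SVR to predict the next position of a vehicle from a short window of past positions on a synthetic one-dimensional trajectory. The window size, kernel, and data are illustrative assumptions; the LSTM-based handover decision component is not shown.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic 1-D trajectory: predict the next position from the previous 3 samples.
rng = np.random.default_rng(0)
positions = np.cumsum(1.0 + 0.1 * rng.standard_normal(200))   # roughly constant speed

window = 3
X = np.array([positions[i:i + window] for i in range(len(positions) - window)])
y = positions[window:]

model = SVR(kernel="rbf", C=10.0, epsilon=0.05)
model.fit(X[:150], y[:150])          # train on the first part of the trajectory

predictions = model.predict(X[150:])
mae = np.mean(np.abs(predictions - y[150:]))
print(f"mean absolute error on held-out samples: {mae:.3f}")
```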
  • Item (Open Access)
    Generating Models of Human Gait in Patients with Parkinson’s Disease
    Navikevicius, Tristan; Department of Computer Science
    Parkinson’s disease is an extremely debilitating condition in which the brain does not produce enough dopamine to accurately coordinate movement. One symptom of Parkinson’s disease, freezing of gait, prevents the affected person from either starting to walk or continuing to walk. It usually begins in the advanced stages of the disease. The primary medication for Parkinson’s disease, Levodopa, is only partially effective for the treatment of freezing of gait. The dataset studied in this thesis provides time-series data of individuals’ gait while performing four tasks of increasing complexity. This thesis performs symbolic regression through genetic programming on that dataset to predict fall likelihood and to create models of the gait of people with and without Parkinson’s disease, including people who may be experiencing freezing of gait, while factoring in their medication status (ON or OFF). The fall prediction experiment suggests that the GP models can predict the likelihood of falling based on an individual’s gait. The models provide insights into how Parkinson’s disease and freezing of gait impact gait patterns in people who have the disease versus those who do not, and enable us to compare the gait of individuals in different groups. It was found that, as expected, gait was similar within groups and different between groups. We also found that for some individuals it was not possible to distinguish between ON and OFF medication states. Future work might include determining the best model for each individual or group, attempting to find a model that accurately represents the individual or group rather than individual trials.
  • Item (Open Access)
    Reinforcement Learning-based Time-Dependable Modelling of Fog Connectivity for Software-Defined Vehicular Networks
    Ferdous, Jannatul; Department of Computer Science
    Connected vehicles are crucial in strengthening vehicular and Intelligent Transport Systems (ITS) by enabling autonomous and dynamic data sharing across the vehicular network. Extensive research has been conducted to predict connectivity, alongside the development of diverse techniques to manage this essential aspect. In recent times, learning methodologies have become increasingly popular for their ability to effectively handle sophisticated models adaptively. Various machine learning algorithms have been demonstrated as convincing methods for rendering a system flexible and predictive. We thus propose a Learning-based Adaptive Connectivity Estimation Model (LACM). This model calculates and enhances the connectivity among different states and actions, monitoring their changes over time. The purpose of this model is to accurately depict the current connectivity status and predict potential fluctuations in fog connectivity. The model utilizes networking and vehicular characteristics to improve the accuracy of its predictions. The design of this model aims to tackle the complexity of the problem by incorporating detailed data into a large state-space representation, thereby enhancing adaptability. The second part of our work proposes a Time-Dependent Connectivity Estimation Model (TDCM). Incorporating time dependency in the model helps to forecast alterations in cluster lifetimes. It shows the progression of cluster evolution, significantly contributing towards achieving a stable and reliable network. Utilizing Long Short-Term Memory within an RL-based framework enables the system to enhance decision-making accuracy through predictions related to connectivity and network maintenance. Extensive analysis conducted through realistic simulations demonstrated that both LACM and TDCM strongly support estimating and maintaining stable connectivity over time. Our evaluation against a previous state-of-the-art approach showed that LACM and TDCM consistently enhanced the connectivity within the network.
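    A minimal sketch of the tabular Q-learning update that an RL-based connectivity estimator could build on; the state encoding, action set, and reward below are placeholder assumptions rather than the LACM/TDCM design, and the LSTM component is not shown.

```python
import random
from collections import defaultdict

# Tabular Q-learning skeleton: states could encode coarse connectivity features
# (e.g. density bucket, speed bucket); actions could be fog-node (re)assignments.
Q = defaultdict(float)                  # maps (state, action) -> estimated value
alpha, gamma, epsilon = 0.1, 0.9, 0.2
actions = ["keep", "switch_fog_a", "switch_fog_b"]

def choose_action(state, rng):
    if rng.random() < epsilon:                                # explore
        return rng.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])          # exploit

def q_update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

rng = random.Random(0)
state = ("dense", "slow")
for _ in range(5):
    a = choose_action(state, rng)
    reward = 1.0 if a == "keep" else -0.2     # placeholder reward signal
    q_update(state, a, reward, state)         # toy loop with a single state
print(dict(Q))
```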
  • Item (Open Access)
    Combining the Power of Attention Models and Many-objective Computational Intelligence Algorithms for Drug Design
    Aksamit, Nicholas; Department of Computer Science
    AI-based approaches have been recently applied to in silico drug design. However, existing approaches and protocols consider the absorption, distribution, metabolism, excretion, and toxicity (ADMET) pharmacokinetic properties of drug candidates in a later stage of drug design processes, where failure is most costly. To address this challenge, this research work aims to achieve three objectives. First, it explores the use of Transformer-based models for ADMET prediction based on a hybrid fragment-SMILES tokenization scheme and two training strategies. Second, it evaluates the performance of contrastive Transformer-based latent models for molecular generation. Third, it applies many-objective computational intelligence algorithms in the continuous latent space generated by a Transformer model to generate optimal drug candidates that fulfill ADMET and other essential properties in parallel. The results of this research work demonstrate the superiority of the hybrid approach over SMILES in predicting ADMET properties. Furthermore, the system proposed in this study integrates metaheuristics with ADMET prediction and latent Transformer models for solving a drug design problem. A comparative study shows the effectiveness of computational intelligence on a many-objective drug design problem, where 1718 drug-like molecules are obtained after applying strict filtering criteria.
  • Item (Open Access)
    Aligning Language Models Using Multi-Objective Deep Reinforcement Learning
    Zhang, Yage; Department of Computer Science
    Large Language Models (LLMs) have been a significant landmark in the advancement of Artificial Intelligence (AI). Aligning LLMs to be helpful and harmless is a booming trend in Natural Language Processing (NLP). One of the dominant alignment techniques is reinforcement learning from human feedback (RLHF). RLHF aims to optimize one objective based on human preferences. However, the cost of high-quality human feedback is enormous, and keeping all human annotators consistent in their opinions on desirable behaviors is also challenging. LLM alignment is intrinsically a multi-objective optimization task, since the goal is to train models to be both helpful and harmless. Helpfulness and harmlessness sometimes trade off against each other, making it difficult for a model trained to optimize one objective to perform well on both. Therefore, to address the potentially conflicting or dominating learning signals underlying LLM alignment, a multi-objective deep reinforcement learning (MODRL) methodology is proposed. The MODRL algorithm is based on an adapted deep reinforcement learning Advantage-Induced Policy Alignment (APA) algorithm and the Aligned-MTL approach for multi-task learning. From the overall perspective of helpfulness and harmlessness, language models trained via MODRL perform better than those trained using single-objective deep reinforcement learning methods that consider both objectives.
  • Item (Open Access)
    Automatic Generation of Human Readable Proofs
    Hu, Xuehan; Department of Computer Science
    Declarative sentences are statements constructed from propositions that can be either true or false. So that such sentences can be easily manipulated by a program, we present a formal system for handling them called propositional logic. Currently, widely used methods for automatically generating proofs have readability limitations: the deduction process does not conform to human reasoning processes, or the proof tree generated is too complicated. The primary objective of this thesis is to design a program that automatically generates human-readable propositional logic proofs using Natural Deduction. Natural Deduction is a calculus for deriving conclusions from a finite set of premises using proof rules. The deduction process is a tree structure with assumptions as leaves, natural deduction rules as nodes, and the conclusion as the root. This calculus models human reasoning very well because it builds proofs incrementally using logical deductions from known facts and assumptions. Our approach will be capable of proving any valid sentence of Propositional Logic automatically and producing a proof tree in the Natural Deduction calculus.
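    A small sketch of the proof-tree representation described above: leaves are premises or assumptions, inner nodes are rule applications, and the root carries the conclusion. The class name, rule labels, and the implication-elimination example are illustrative assumptions, not the thesis's implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Proof:
    """A natural-deduction proof tree: leaves are premises/assumptions,
    inner nodes are rule applications, the root's conclusion is the theorem."""
    conclusion: str
    rule: str                       # e.g. "premise", "->E" (implication elimination)
    premises: List["Proof"] = field(default_factory=list)

    def render(self, depth=0):
        lines = [("  " * depth) + f"{self.conclusion}   [{self.rule}]"]
        for p in self.premises:
            lines.extend(p.render(depth + 1))
        return lines

# Deriving q from the premises p and p -> q using implication elimination (->E).
p = Proof("p", "premise")
p_implies_q = Proof("p -> q", "premise")
q = Proof("q", "->E", [p_implies_q, p])

print("\n".join(q.render()))
```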
  • Item (Open Access)
    Language Agnostic Software Energy Kernel Framework
    Dipanzan, Sarwat Islam; Department of Computer Science
    Software efficiency has suffered in recent times, and code quality and optimization are often an afterthought. There is also no standard operating-system support or unified tooling for gathering fine-grained energy consumption data about source code. Current tooling tackles this problem by measuring the entire process or application as a whole, so localizing the responsible part of the source code is a blind endeavour. It is also time-consuming and expensive to address such efficiency concerns during the development phase. In addition, recent hardware leaps have made it possible for non-performant software to run relatively fast without much regard for code efficiency; the downside is that the hardware compensates for poor code quality by using far more resources, increasing energy usage. In this thesis, we take an energy-centric view of running applications and devise tooling to assist the software developer when choosing libraries, frameworks, programming languages, and critical architecture designs. We propose a standard, unified way of gathering energy consumption data from the operating system kernel and present two solutions: a kernel energy module and associated energy-reading libraries. The objective is to introspect processes and applications without massively altering source code. The idea is to probe source code and gather energy data for comparison across different implementations, creating awareness among software developers. The tooling is designed to be application- and programming-language-agnostic so that it can infer runtime metrics without many assumptions about the underlying software. This makes it possible to cover virtually any scenario and to compare software with different versions, environments, and systems. The thesis also conducts extensive machine-learning tests using different libraries and synthetic datasets to shed light on ML experiments and their energy consumption. Together, these approaches let developers make informed decisions about which parts to prioritize for improvement and achieve greener software.
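    For context, one widely available source of coarse energy data on Linux x86 machines is the powercap/RAPL sysfs interface; the sketch below reads it around a code region. This is only an illustration of the measurement idea, not the kernel energy module or libraries proposed in the thesis; the sysfs path varies by machine and may require elevated permissions.

```python
import time
from pathlib import Path

# The powercap/RAPL sysfs counter reports package energy in microjoules and
# wraps around at max_energy_range_uj; wrap handling is omitted for brevity.
RAPL = Path("/sys/class/powercap/intel-rapl:0/energy_uj")

def read_uj():
    return int(RAPL.read_text().strip())

def measure(fn, *args):
    """Run fn(*args) and return (result, joules consumed, seconds elapsed)."""
    before = read_uj()
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start
    joules = (read_uj() - before) / 1e6
    return result, joules, elapsed

def workload(n):
    return sum(i * i for i in range(n))

if RAPL.exists():
    _, joules, seconds = measure(workload, 2_000_000)
    print(f"energy: {joules:.3f} J over {seconds:.3f} s")
else:
    print("RAPL sysfs interface not available on this machine")
```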
  • Item (Open Access)
    Adaptive Logging System: A System Using Reinforcement Learning For Log Placement
    Khosravi Tabrizi, Amirmahdi; Department of Computer Science
    The efficient management of software logs plays a key role in software development, as it allows for the examination of runtime information for post-execution analysis. Given the significance of logs and the possibility that developers may not possess the necessary knowledge to make informed logging decisions, it is important to have a robust log-placement framework that supports developers. Prior attempts to address this challenge have proposed various frameworks; however, these frameworks are either limited to a single logging objective or rely on methods that exhibit poor cross-project consistency. This study introduces a novel performance logging objective to capture and reveal performance bugs, and presents an adaptive software logging approach based on reinforcement learning, which can adapt to multiple logging objectives. This framework is not limited to a specific project and shows superior cross-project accuracy.
  • Item (Open Access)
    Memory Pressure: Early Identification and Proactive Taming in Resource Constrained Devices
    Chakraborty, Pranjal; Department of Computer Science
    Latency-critical systems face significant performance challenges when encountering memory pressure. Existing approaches are mainly reactive and instantaneous, and they often fail to accurately identify the root cause of memory pressure, resulting in delayed and ineffective response strategies. In this work, we address this limitation by proposing an alternative approach to proactively detect memory pressure and identify its root cause in such systems. Our method enables the activation and deactivation of extended process-level profiling based on the predicted memory pressure, facilitating the identification of the root-cause process. Through evaluation, we achieved 85% accuracy in forecasting memory pressure situations and correctly identified the responsible process in 83% of use cases. We also propose an effective adaptive sampling framework to further optimize system monitoring and data collection, and were able to leverage a state-of-the-art technique to make useful sampling-rate changes 78.5% of the time.
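    For context, Linux exposes memory pressure stall information through the PSI interface at /proc/pressure/memory; the sketch below parses it and applies a trivial moving-average forecast. The threshold and forecasting rule are illustrative assumptions, not the prediction model evaluated in the thesis.

```python
from pathlib import Path

PSI = Path("/proc/pressure/memory")   # Linux PSI interface (kernel >= 4.20)

def read_psi_some_avg10():
    """Return the 'some avg10' stall percentage, parsed from a line such as
    'some avg10=1.23 avg60=0.80 avg300=0.40 total=12345678'."""
    for line in PSI.read_text().splitlines():
        if line.startswith("some"):
            fields = dict(kv.split("=") for kv in line.split()[1:])
            return float(fields["avg10"])
    return 0.0

def forecast(history, window=5):
    """Trivial moving-average forecast of the next pressure reading."""
    recent = history[-window:]
    return sum(recent) / len(recent)

history = [0.2, 0.3, 0.9, 1.8, 3.5]    # would normally be sampled over time
predicted = forecast(history)
if predicted > 1.0:                     # illustrative threshold
    print(f"predicted pressure {predicted:.2f}: enable per-process profiling")
else:
    print(f"predicted pressure {predicted:.2f}: keep lightweight monitoring")
if PSI.exists():
    print("current 'some avg10':", read_psi_some_avg10())
```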
  • Item (Open Access)
    Emergent Behaviour in Game AI: A Genetic Programming and CNN-based Approach to Intelligent Agent Design
    Joseph, Marshall; Department of Computer Science
    Emergent behaviour is behaviour that arises from the interactions between the individual components of a system, rather than being explicitly programmed or designed. In this work, genetic programming is used to evolve an adaptive game AI, also known as an intelligent agent, whose job is to capture up to twenty-five prey agents in a simulated pursuit environment. For a pursuit game, the fitness score tallies each prey the predator captured during a run. The fitness is then used to evaluate each agent and choose who moves forward in the evolutionary process. A problem with only choosing the best performing agents is that genetic diversity becomes lost along the way, which can result in monotonous behaviour. Diverse behaviour helps agents perform under situations of uncertainty and creates more interesting computer opponents in video games, as it encourages the agent to explore different possibilities and adapt to changing circumstances. Inspired by the works of Cowan and Pozzuoli in diversifying agent behaviour, and Chen’s work in L-system tree evaluation, a convolutional neural network is introduced to automatically model the behaviour of each agent, something previously done manually. This involves training the convolutional neural network on a large data set of behaviours exhibited by the agents, which take the form of image-based traces. The resulting model is then used to detect interesting emergent behaviour. In the first set of experiments, the convolutional neural network is trained and tested on several sets of traces, then the performance of each run is analyzed. Results show that the convolutional neural network is capable of identifying 6 emergent behaviours with 98% accuracy. The second set of experiments combine genetic programming and the convolutional neural network in order to produce unique and interesting intelligent agents, as well as target specific behaviours. Results show that the system is able to evolve more innovative and effective agents that can operate in complex environments and could be extended to perform a wide range of tasks.
  • Item (Open Access)
    Evolving Weighted Networks to Simulate Epidemics and Lockdowns
    Sargant, James Robert; Department of Computer Science
    Simulating epidemics is vital to understanding their effect on human populations, and developing models to provide insights into epidemic behaviour is a primary goal of this thesis. A generative evolutionary algorithm is used to evolve weighted personal contact networks that represent physical contact between individuals, and thus possible paths of infection during an epidemic. The evolutionary algorithm evolves a list of edge-editing operations applied to an initial graph. Two initial graphs are considered, a ring graph and a power-law graph. Different probabilities of infection and a wide range of weights are considered, which improve performance over other work. Modified edge operations are introduced, which also improve performance. When attempting to match a given epidemic profile, similar results are obtained when using either initial graph, but both improve performance over other work. The impact of different lockdown strategies upon the total number of infections in an epidemic is evaluated for two models of infection: one in which the disease confers permanent immunity, and one in which it does not. The strategies are based upon the proportion of the population infected at a given time required to trigger lockdown, combined with the proportion of interactions removed during lockdown. The population, its interactions, and the relative strengths of those interactions are stored in a weighted contact network, from which edges are removed during lockdown. These edges are selected using an evolutionary algorithm (EA) designed to minimize total infections. Using the EA to select edges significantly reduces total infections in comparison to random selection. In fact, the EA results for the least strict conditions were similar to or better than the random results for the most strict conditions, showing that a judicious choice of restrictions during lockdown has the greatest effect on reducing infections. Further, when using the most strict rules, a smaller proportion of interactions can be removed to obtain similar or better results in comparison to removing a higher proportion of interactions under less strict rules.