
Alexander Ellis-Wilson

Web Developer

Thanks for visiting

Born in 2002 in England, I believe computing should be accessible and safe for everyone. I graduated in 2024 from Bournemouth University with a BSc in Cyber Forensics and Security.

About me

I have always been drawn to the intersection of design and technology: wondering how things were made, how they work, and what could be improved about them.

This drove me to pursue a career in technology and learn as much about its different aspects as possible.

Recently, I graduated from Bournemouth University with First Class Honours in BSc Cyber Forensics and Security, where I learnt about the malicious side of computing, developed a love for Machine Learning and gained a deeper knowledge of the fundamentals of computers and SIEM (Security Information and Event Management).

Playing RPGs like World of Warcraft on my old Pentium laptop started my fascination with programming and how it could be utilised to create complicated and nuanced things.

Eventually, I was introduced to Python, where the spark was truly lit: with only a simple set of rules, I created the very basics of a text-based adventure game. This trend continued in college, where I began experimenting with 3D graphics in p5.js and basic app development in Java and Android Studio, making the most of the free time COVID provided.

Picture of me smiling outside with a Japanese bridge in the background, with a TV-static effect for a more paranormal look

Experience

"A jack of all trades is a master of none, but oftentimes is better than a master of one." - often attributed to William Shakespeare

Aug 2024 -- Present
Web Developer • Mercantile Trust
Working within the Norfolk Capital Group, developing web-based solutions to their internal issues and building their websites across both the front end and the back end. Shaping how their websites look and feel through reactive elements, current design trends and SEO principles.
  • HTML
  • CSS
  • SCSS
  • JavaScript
  • jQuery
  • Angular
  • Node.js
  • NUnit
  • .NET Core
  • ASP.NET
  • MVC
  • Umbraco 13-14
Jun 2022 -- Jan 2024
Placement Fullstack Developer • thehumantech Agency
Supporting clients daily through the support queue, building reactive CMS-first web pages with a variety of frontend JS frameworks, and developing blocks and tools in ASP.NET and .NET Core.
  • HTML
  • CSS
  • SCSS
  • JavaScript
  • TypeScript
  • Angular 14
  • Node.js
  • NUnit
  • .NET Core
  • ASP.NET
  • MVC
  • Umbraco 7-13
  • Sitecore

Think your company should be on here?

Get in touch!

Previous Projects Page 1

On mobile? Try pressing and holding the CD instead of just tapping 😁

Previous Projects Page 2


SVG stating more coming

Contact me

PhishTackle-API


What is PhishTackle-API?

PhishTackle-API combines Machine Learning with Cyber Security phishing detection. It is a RESTful API created as the actionable artefact of my dissertation final project, demonstrating the effectiveness of Machine Learning, its implications within Cyber Security and its capability to expand existing security solutions without compromising pre-established security integrity.

Utilising the processing power of Machine Learning alongside the lightweight framework of Node.js, a RESTful API was created with an accompanying web application capable of analysing high-level natural language for traits indicative of phishing attacks.

Development

The Models

The PhishTackle-API models were created using the Scikit-Learn Python library, training two Machine Learning models on email content, text message content and URLs from a variety of sources, including phishtank.org. Different meta-heuristics (stages of preprocessing that determine how a model should interpret data) were evaluated in depth to create a tool capable of quickly analysing text-based data and producing a binary classification result alongside a float representation of its certainty.

Models were evaluated by first defining and ranking a series of evaluable data points by their effect on the final effectiveness of the models. After compiling this list, an Excel sheet was created to quickly analyse and rank the different initial models and their meta-heuristic parameters.

Model Analysis Vector Rankings and Their Effects on the Final Model

ID | Statistic | Effect on the Final Model | Importance
1 | Minimising False Phishing | Reducing False Phishing statistics ensures safe emails are not classified as phishing. | 2
2 | Minimising False Safe | Reducing False Safe statistics ensures phishing emails are not incorrectly classified as safe. | 1
3 | Maximising model "score" | Score is an automatically calculated accuracy: total data correctly labelled divided by total data checked. Unbalanced datasets may produce high scores by always assuming the same result. | 11
4 | Maximising Phishing Precision | Ensures data labelled as phishing is in fact phishing. Many phishing values may not be classified as such, but all values labelled as phishing are correctly labelled. | 4
5 | Maximising Phishing Recall | Ensures all data that should be labelled as phishing is labelled as phishing. However, this may lead to many safe values being labelled as phishing. | 3
6 | Maximising Safe Precision | Ensures data labelled as safe is in fact safe. Many safe values may not be classified as safe, but all values labelled as safe are correctly labelled. This could cause many safe emails to be blocked by existing security systems utilising this model. | 6
7 | Maximising Safe Recall | Ensures all data that should be labelled as safe is labelled as safe. However, this may lead to many phishing values not being caught by the model. | 5
8 | Maximising F1 Score | F1 score indicates how healthy a model is. As it balances precision and recall, if either value is low, F1 will reflect that even if the other is 100%. | 7
9 | Maximising Phishing Accuracy | Would indicate all phishing data is being caught and processed correctly. However, this may hide overfitting where all data is considered phishing. | 8
10 | Maximising Safe Accuracy | Would indicate all safe data is being analysed and processed correctly. However, this may hide overfitting where all data is considered safe. | 9
11 | Maximising Cumulative Model Accuracy | The mean of the individual label accuracies, giving the best indication of whether one label is overfitting. For instance, if one label had 100% accuracy and the other 0%, the cumulative model accuracy would be 50%, as the model only gets half the data right. | 10
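The relationships between these statistics can be sketched in plain Python. This is an illustrative re-implementation with invented names, not the project's actual Scikit-Learn evaluation code, and it also demonstrates vector 3's caveat: an unbalanced dataset can score highly even when the cumulative accuracy reveals overfitting.

```python
# Illustrative sketch of the evaluation statistics above, computed from a
# binary confusion matrix. Names are hypothetical; the real project used
# Scikit-Learn's metrics module.

def evaluate(tp, fp, tn, fn):
    """tp/fn count phishing data; tn/fp count safe data."""
    stats = {}
    # Vector 3: overall "score" (total correct / total checked)
    stats["score"] = (tp + tn) / (tp + fp + tn + fn)
    # Vectors 4-7: per-label precision and recall
    stats["phishing_precision"] = tp / (tp + fp) if tp + fp else 0.0
    stats["phishing_recall"] = tp / (tp + fn) if tp + fn else 0.0
    stats["safe_precision"] = tn / (tn + fn) if tn + fn else 0.0
    stats["safe_recall"] = tn / (tn + fp) if tn + fp else 0.0
    # Vector 8: F1 balances phishing precision and recall
    p, r = stats["phishing_precision"], stats["phishing_recall"]
    stats["f1"] = 2 * p * r / (p + r) if p + r else 0.0
    # Vector 11: mean of per-label accuracies exposes overfitting
    stats["cumulative_accuracy"] = (
        stats["phishing_recall"] + stats["safe_recall"]
    ) / 2
    return stats

# A model that labels everything "phishing" on an unbalanced dataset:
stats = evaluate(tp=90, fp=10, tn=0, fn=0)
# score is a flattering 0.9, but cumulative_accuracy is only 0.5
```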

After the creation and analysis of the different models, the skl2onnx library was used to convert the trained models into the .onnx format for implementation within the Node.js ONNX Runtime environment. This analyses new data matching the expected input vectors without the need for re-training, creating a lightweight Machine Learning program capable of handling processing on a separate server instead of the end user's system.

The API

The API was developed in Node.js and Express.js following RESTful API development standards outlined on Stack Overflow. Ensuring these standards were followed required a strong foundation, so I first created a generic RESTful API that could handle GET, POST, PATCH and DELETE requests and retrieve and update data from a simple JSON database. This established standards early in development, as when the project was started the end-user interactions had not been completely established: there was no end client to provide goals or expected usage for the final product.
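The shape of that generic foundation can be sketched language-agnostically. The real code was Node.js/Express; the following is a hedged Python illustration of the same pattern, with a dict standing in for the simple JSON database and every name invented for the example.

```python
# Hedged sketch of a generic RESTful resource handler; a dict stands in
# for the simple JSON database, and all names here are invented for the
# example (the real foundation was Node.js/Express).

def handle(db, method, path, body=None):
    _, resource, *rest = path.split("/")
    item_id = rest[0] if rest else None
    table = db.setdefault(resource, {})
    if method == "GET":
        # GET /resource lists everything; GET /resource/id fetches one item
        return table.get(item_id) if item_id else list(table.values())
    if method == "POST":
        new_id = str(len(table) + 1)  # naive id scheme, fine for a sketch
        table[new_id] = {"id": new_id, **body}
        return table[new_id]
    if method == "PATCH" and item_id in table:
        table[item_id].update(body)
        return table[item_id]
    if method == "DELETE":
        return table.pop(item_id, None)
    return None
```

In the real API each branch would map onto an Express route handler; the point is simply that all four verbs resolve against one data store.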

To establish the API's pathways and final uses, an internal API Use Flow was created demonstrating how data should be handled and passed to relevant functions. This was an important step, as all three parts of this project were being completed simultaneously, and it allowed a skeleton to be created before the Machine Learning models were finished, so development could continue during longer processing or training periods.

The Accompanying Web App

For the purposes of the outlined project, the web application was not intended as a finalised product but as a slimmed-down demonstration of how the API could be used and of the effectiveness of the trained models. It is therefore designed to be as simple to use as possible, displaying API responses for analysis.

After the completion of the basic web app and its initial connection to the API, a final Use Flow was created to plan how the API should handle requests before any processing is performed. This added further structure to the API's request handling and introduced quality-of-life functionality, including a caching policy and a rate-limiting function.
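As an illustration of the rate-limiting idea, here is a minimal fixed-window limiter sketched in Python. The actual API implemented this in Node.js, so the class and method names are invented for the example.

```python
import time

class RateLimiter:
    """Minimal fixed-window rate limiter keyed by client identifier.
    Illustrative sketch only; the real API implemented this in Node.js."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.hits = {}  # client -> (window_start, request_count)

    def allow(self, client, now=None):
        now = time.monotonic() if now is None else now
        start, count = self.hits.get(client, (now, 0))
        if now - start >= self.window:
            start, count = now, 0  # window expired: reset the counter
        if count >= self.limit:
            return False  # over the limit for this window
        self.hits[client] = (start, count + 1)
        return True
```

A caching policy can follow the same shape: a dict from request payload to (timestamp, response), evicting entries older than the chosen TTL.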

Umbraco CMS

What is Umbraco

Umbraco is a content management system (CMS) used to create block-based websites that content creators with minimal tech experience can use to build cohesive-looking sites. I first encountered Umbraco working at thehumantech agency, starting on Umbraco 7, fixing existing website bugs and building blocks in an MVC development environment.

At thehumantech agency, I worked on all versions of Umbraco up to Umbraco 12, building blocks in ASP.NET and .NET Core, and backend services integrating Umbraco functionality.

Tools Created

Umbraco Media Parser

The Umbraco Media Parser is a simple Bash shell program created for Linux machines, capable of searching through files for images and compressing them to around 80% of their original size.

The program is pointed at a specified folder containing the Umbraco media directory: a directory of folders with randomised names, each containing a single media file.

How it works:

flow chart depicting the stages of the umbraco media parser
  1. First, the program checks the created "Read_contents" file for a nested folder
  2. If a folder is found, the program checks the first item in the folder
  3. For each item in the folder, the program checks to see if the item is a file or folder assigning a value of "shallow" or "deep" depending on the result of the check
  4. If a folder is found the program loops through the items of the nested folder for files
  5. When a file is found within the folder, the program runs the "checkImg" function with a path to the file and the "deep" or "shallow" tag
  6. Within the "checkImg" function the provided file's file extension is checked for:
    • .jpg
    • .jpeg
    • .png
    • .webp
    • .tif
    • .tiff
  7. After finding an eligible file, a variable is created with the file’s path and a console message is created informing the user that the file is being optimised
  8. Finally, the program runs the "convert file -quality 80% fileoutput" command, reducing the file size to around 80% of the original and outputting a "file completed" message

This is then looped for each file within the folders, replacing the old images with the new ones without changing file extensions or folder names, so the media can be easily replaced with no further development required.
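For illustration, the same walk-and-compress logic can be mirrored in Python. The original tool is Bash; this sketch invents a dry_run flag for testing and shells out to the same ImageMagick convert command.

```python
import subprocess
from pathlib import Path

# Extensions from step 6 of the flow above
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".webp", ".tif", ".tiff"}

def compress_media(media_root, dry_run=False):
    """Walk the Umbraco media directory (randomised folders, one file each)
    and recompress eligible images in place at quality 80.
    Python mirror of the Bash tool described above; names are invented."""
    processed = []
    for path in sorted(Path(media_root).rglob("*")):
        if path.is_file() and path.suffix.lower() in IMAGE_EXTS:
            print(f"Optimising {path}")
            processed.append(path)
            if not dry_run:
                # Same idea as the Bash version: overwrite in place,
                # keeping the file name and extension unchanged
                subprocess.run(
                    ["convert", str(path), "-quality", "80", str(path)],
                    check=True,
                )
            print(f"{path} completed")
    return processed
```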

Automated Index Rebuilder

The Automated Index Rebuilder is a backend service created to detect common anomalies with Umbraco indexes, such as null values and empty columns, which could cause issues when accessing and manipulating the data within the index. The service runs every 30 minutes, starting on the first run of the deployed Umbraco website, using Umbraco's built-in IIndexDiagnostics functionality.

The Index Health Checker works by first adding the IndexHealthCheckerTask function to the Hosted Services as a recurring task, instantiating the IRuntimeState, IExamineManager, IProfilingLogger, ICoreScopeProvider, IIndexDiagnosticsFactory, IIndexRebuilder and ILogger dependencies.

Next, the service checks the site is up and creates a scope so changes are completed within a single transaction.

The Health Checker then retrieves a list of the Examine manager indexes in use on the site and checks that each index is not null and has a usable name. After ensuring the index is valid, an information log is created stating which index is being health-checked before sending the desired index name to the IndexMaintainance function.

After sending the Index Key to the IndexMaintainance function, the index is retrieved with the examineManager as an IIndex object, which is then converted to an IIndexDiagnostics object for its access to diagnostic tools.

Following the creation of the IIndexDiagnostics object, the index's document count and health status are checked. These values are then combined into a boolean for the index's health: the result of the isHealthy method and whether the document count is greater than 0.

If an index is found to be unhealthy, a traceDuration log is made stating the index is being repaired and the unhealthy index key is passed to the "RebuildIndex" function. Otherwise, an information log is created stating the "Index was not rebuilt", with the document count and health status of the index attached.

Within the RebuildIndex function, a boolean is created checking the index can be rebuilt. If it cannot, a warning log is created stating the index was unable to be rebuilt, along with the health status when checked and the document count of the index.

If the index can be rebuilt, it is passed into the IIndexRebuilder.RebuildIndex() function and again checked for document count and health status.

If the index is healthy, an information log is made stating the index was successfully rebuilt and is healthy, with the updated document count and health status included. Otherwise, a warning log is made stating an index rebuild was attempted but failed, with the index's statistics attached.

The above process is repeated for all indexes that aren't excluded from the function. Finally, the scope is completed and a Task.CompletedTask response is returned, triggering the timer for the next loop.
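The core decision in this service can be summarised in a small Python sketch. The real implementation is C# against the Umbraco interfaces named above; here a plain dict stands in for IIndex/IIndexDiagnostics, and all names are stand-ins.

```python
def check_index(index, rebuild, logs):
    """Sketch of the health-check decision: an index counts as healthy only
    if diagnostics report healthy AND it holds at least one document.
    'index' is a plain dict standing in for IIndex/IIndexDiagnostics."""
    healthy = index["is_healthy"] and index["document_count"] > 0
    if healthy:
        logs.append(f"Index {index['name']} was not rebuilt")
        return False
    if not index["can_rebuild"]:
        logs.append(f"WARN: index {index['name']} cannot be rebuilt")
        return False
    rebuild(index)  # stands in for IIndexRebuilder.RebuildIndex()
    # Re-check document count and health status after the rebuild
    if index["is_healthy"] and index["document_count"] > 0:
        logs.append(f"Index {index['name']} successfully rebuilt")
    else:
        logs.append(f"WARN: rebuild of index {index['name']} failed")
    return True
```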

Flow chart of how the automated index rebuilder works

Canonical URL Composer

The Canonical URL Composer was created to work around a missing feature in Umbraco 11, where canonical or replacement URLs could no longer be created for web pages with long paths.

Created as a backend composer in C#, it intercepts page requests to the server. This is done by registering a new UrlSegmentProvider with the project's builder and adding a step when pages are published that checks a new field on Umbraco pages called "cannonicalUrl".

The first part of the function takes in requested page paths and analyses the final segment of the path against the requested page's content, checking for a filled-in "cannonicalUrl" field. If found, the last segment of the requested URL is updated to the desired path, allowing a seamless and more readable resultant page path.

The second and more complicated part of this composer runs whenever a page is published updating Umbraco's list of recognised URL paths for a given piece of content:

  1. First, the function retrieves the ID and URL of the given content, using the page's ID with Umbraco's "TryGetUmbracoContext()" to create an umbracoContext accessor and the "content.GetById()" function to retrieve the page's content
  2. Then, after ensuring the content is not null, the function attempts to retrieve the data from the "cannonicalUrl" field (added to every page through compositions) and evaluates it for any content
  3. If no content is found within the "cannonicalUrl" field, the base URLs are returned. If content is found, regex is used on the name of the Umbraco page to mimic the existing transformation Umbraco performs on page names
  4. The regex-edited name segment and the cannonicalUrl field value are then stored in a variable: the umbracoContent.Url() path with the name transform replaced by the cannonicalUrl field data, to be used later
  5. Finally, the cannonicalUrl field data is checked for a starting "/", which is added to the variable if not provided. The original URL for the page and the created URL (as long as the created URL contains valid content) are added to an array of UrlInfo objects and returned, updating Umbraco's list of valid URLs for the published content.
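The name-to-segment transform and the leading-slash check from the steps above can be sketched as follows. The regex is my approximation of Umbraco's page-name transformation, not its exact implementation, and the function names are invented for the example.

```python
import re

def to_url_segment(name):
    """Approximation of Umbraco's page-name-to-URL-segment transform:
    lowercase, runs of non-alphanumerics collapsed into single dashes."""
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")

def canonical_url(page_url, page_name, canonical_field):
    """Swap the default name-derived segment for the cannonicalUrl value,
    adding a leading '/' to the field data if it was not provided."""
    if not canonical_field:
        return page_url  # no field content: keep the base URL
    if not canonical_field.startswith("/"):
        canonical_field = "/" + canonical_field
    return page_url.replace("/" + to_url_segment(page_name), canonical_field)
```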
Flowchart of how canonical URL composer works

Techne Diversity

The craft of research in the arts and humanities

What is the Diversity Hub?

The Techne Diversity Hub is a website created by technē to provide funding and support to PhD students working in Arts and Humanities disciplines. The Diversity Hub focuses on supporting students of colour, aiming to improve the inclusion issues found in postgraduate research schemes.

This project was important to me both for the client's message and as one of the first projects I worked on after my education in the basics of ASP.NET, MVC and CMS development principles. Working on this project, I joined the team of developers at thehumantech agency, whom I learned alongside and befriended. No longer working on pre-existing websites and bug fixes with the occasional new block, I was working on a work-in-progress site, building core functionality.

What did I do?

Joining the Technē internal team, I was tasked with smaller fixes and client requests, such as images not appearing if they were too large for the JavaScript to detect them as being in frame, and all of the applications being labelled as created by the lead developer (even if they were not!). Through communication with their account manager, I fixed these bugs and ensured I presented an up-to-date view of their website's progress.

After completing these tasks, I was allowed to start building core functionality for the website, an opportunity I could not pass up. I built an email function in the C# backend to inform mentors when a new mentee application was created, with the contact details provided from the application, and to inform mentees when their application received a response. Afterwards, I worked on an Umbraco redirection component, built using Umbraco compositions and non-specific code, which allowed its use across different Umbraco 11 websites with minimal code changes to the .cshtml files. The module allowed an optional end date for the redirect and an optional redirection page; if neither was provided, it would default to a generic "This page is not available, this page will be available on DD/MM/YYYY" page which, with minimal styling, could comfortably fit in with any existing site.

Wessex Internet

Who is Wessex Internet?

Wessex Internet is a company dedicated to bringing full-fibre internet to neglected areas such as the countryside, at fast speeds and affordable prices. Wessex Internet works with the government to ensure that, no matter how rural, the internet is accessible wherever you are, chipping away byte by byte at the digital divide.

During my time at thehumantech agency, Wessex Internet was one of my consistent clients, always looking to update their site and open to new ideas and recommendations for improving their website. Their in-house development team created and maintained an API used within their order journey, which was provided to the team at thehumantech agency with documentation for a joint development project, on which I was given the privilege of working with 2 front-end developers.

Work Completed with Wessex Internet

Working with Wessex Internet, I completed many small projects alongside 2 intensive ones. First was the development and maintenance of blocks and site styling across their website, including settings toggles for their blocks that allowed simplified adjustments to existing blocks, such as switching an image from left to right without creating a completely new block. This created a more streamlined end-user experience, the goal of any CMS development.

The Development of a New Site Navigation

One of the bigger tasks completed with Wessex Internet was the creation of a new site navigation for their website, one of the first solo projects I completed while working at thehumantech agency. The original navigation was a relic of the site's original redesign and was not optimised for the different links they would eventually want on it; it was not responsive to smaller screen sizes, as there was no hamburger menu alternative for mobile. It no longer satisfied the client's needs and was completely hard-coded, not utilising the generic settings page created within the Umbraco CMS specifically for holding data accessible from all pages.

This was solved with the inclusion of new fields into the site settings page for each navigation main link and optional sub-navigation links, the redesign of the existing navigation to a more minimalist and modern design and the creation of styling breakpoints capable of adjusting the site's navigation to work with all screen sizes.

Image showing desktop view of site navigation with a dropdown highlighted and not highlighted
Image showing mobile view of site navigation with a dropdown highlighted and not highlighted

The Development of a New Order Journey

The pièce de résistance of my work with Wessex Internet was the development of a new order journey to replace the old Umbraco package-based one, which was producing lower-than-expected conversion rates, was difficult to keep up to date with their internal packages, was vulnerable to tampering, and relied on legacy code following outdated web development principles.

For the development of the new order journey, I was part of a team with 2 other developers tasked with creating an Angular-based order journey, using the flexibility of the framework to build alongside Wessex Internet's in-house development team and their in-development API to create an easy-to-use, reactive order journey. Throughout development, weekly meetings were held with Wessex Internet and its developers to ensure the project was progressing smoothly, and issues encountered on either side were recorded in an open document accessible to both teams.

Throughout development, a combined Scrum and Feature-Based development lifecycle was followed, with the account manager and project team working in tandem to produce realistic sprints and carve out an achievable timeline with actionable, functional objectives that could be demonstrated to the client. This streamlined communication between the teams and facilitated the creation of a product the client was immediately satisfied with.

Image showing the Wessex Internet order journey box on the home page

OneWelbeck Healthcare

Who is OneWelbeck?

OneWelbeck is a private healthcare company based in London, England, which, with its wide variety of specialist centres and consultants, provides healthcare advice, consultations and treatments. Working with OneWelbeck at thehumantech agency, I was part of the initial team developing the new website for their redesign, along with many features and stylistic changes.

Creation of Heroes and Other Misc Blocks

Working with OneWelbeck and the team on the site's creation, I aided in the development and integration of blocks across the site into the Umbraco CMS and the site's backend, including heroes, award blocks, banners and accordions.

Page Themes

OneWelbeck wanted themes for each of their specialities that would be visible throughout their website on all related content, including speciality centres, conditions, treatments, consultants and the blog.

For this purpose, a new dropdown field called "speciality centre" was added to all applicable content in Umbraco. This field was integrated into the backend of the website's controllers for each model type and the .cshtml of each component, allowing the extraction of the dropdown data to add the desired colours to the content, such as wine-red accents to heart health pages.

Website Searches

Finally, after the website’s redesign, changes were requested for the website’s multiple searches in how results were returned from the examine manager, utilising linq statements for reducing result array length and specialising results for filters and lossy and lossless results.

For the changes requested on the consultant search, new LINQ statements were constructed to re-organise result arrays into the desired orders, including:

  • randomising consultants to ensure all consultants had an equal opportunity to be seen
  • new functions for text searches analysing whether the provided search data was more than one word
  • for single-word searches, lossy results are returned, using regex to retrieve results that partially match the word
  • for multiple-word searches, lossless results are returned, where only exact matches are included, to aid in searching for specific specialities or consultants
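The single-word versus multiple-word branching described above can be illustrated in Python. The original used C# LINQ against the Examine results; the names and the exact matching rules here are assumptions for the sketch.

```python
import random
import re

def order_results(consultants, query, seed=None):
    """Sketch of the consultant search ordering: empty queries return a
    randomised list, single-word queries return lossy (partial) matches,
    and multiple-word queries return lossless (exact) matches only."""
    words = query.split()
    if not words:
        shuffled = list(consultants)
        random.Random(seed).shuffle(shuffled)  # equal opportunity to be seen
        return shuffled
    if len(words) == 1:
        # Lossy: regex retrieves results containing a partial match
        pattern = re.compile(re.escape(words[0]), re.IGNORECASE)
        return [c for c in consultants if pattern.search(c)]
    # Lossless: only exact matches are returned for multi-word searches
    return [c for c in consultants if c.lower() == query.lower()]
```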