This is a project I made in collaboration with Coding Club, and it's a massive improvement over the old Poem Collector. Some of the new features can be seen below:
High-tier users have control over the accounts they create. At any time, they can ban users by navigating to "Manage Moderators" and clicking Ban on the account of their choice.
This was a group project I was a part of for HOSA medical innovation. Our mission was to develop a machine and algorithm to detect melanoma on skin. I played a huge part in developing the app as well as the AI algorithm. Here are some pictures of the app I designed:
A Coding Club project I led back in high school for my local library. The project is open source and published on GitHub: https://github.com/Caeden01/West-Bloomfield-Library-Poem-Collector.
This was a team project I worked on for HOSA medical innovation. We developed a device and algorithm to help streamline and reduce the cost of diagnosing acute lymphocytic leukemia. Our team won 3rd place at the state level and 12th place internationally, competing against approximately 5,000 teams across the world. During the project, I played a huge part in developing both the AI model and the physical prototype.
Our algorithm is built on a multi-stage AI architecture:
We trained our AI on several datasets from many different sources, including:
We haven't yet had the chance to evaluate our finished model in depth, but we found it achieved state-of-the-art performance across multiple datasets and proved more reliable on edge cases where existing models fail.
I plan to open-source the Better Call Cell project in the near future. Be prepared for the drop!
Back in high school, a friend and I were trying to find a way to relate optical flow to AI-estimated depth in order to create a more efficient and more accurate 3D visual map for a self-driving car we were working on. Here's an explanation of the math this site covers.
3D points can be projected onto a 2D screen using the equation \[ S_p = \cot\left(\frac{\theta}{2}\right) \cdot \frac{(x, y)}{z} \] where \( x, y, z \) are the spatial coordinates relative to the camera, and \( \theta \) is the field of view. This equation is derived from the projection of a point inside a pyramid with depth \( z \) onto a plane at a depth of one unit from the apex of the pyramid. The formula is valid for points where \( z \) is in the range \( (0, \infty) \).
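As a quick illustration of the formula (a throwaway Python sketch, not the actual car code; the function name and sample values are made up):

```python
import math

def project(point, fov):
    """Project a 3D point (x, y, z) onto the 2D screen using
    S_p = cot(theta/2) * (x, y) / z.  Valid only for z > 0."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the camera (z > 0)")
    scale = 1.0 / math.tan(fov / 2)  # cot(theta/2)
    return (scale * x / z, scale * y / z)

# A point straight down the view axis projects to the screen center.
print(project((0.0, 0.0, 5.0), math.radians(90)))  # (0.0, 0.0)

# With a 90-degree FOV, cot(45 deg) = 1, so the projection is just (x/z, y/z).
print(project((2.0, 1.0, 4.0), math.radians(90)))  # ~(0.5, 0.25)
```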
Optical flow is defined as the change in projected points, or \( \frac{dS_p}{dt} \). Now, let's express optical flow in terms of depth and movement in the \( x, y, \) and \( z \) dimensions, represented as \( \frac{dx}{dt} \), \( \frac{dy}{dt} \), and \( \frac{dz}{dt} \).
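Carrying out that differentiation with the quotient rule gives one way to write the result (same symbols as above; shown here just to make the step explicit):

\[ \frac{dS_p}{dt} = \cot\left(\frac{\theta}{2}\right) \cdot \frac{\left(\frac{dx}{dt}, \frac{dy}{dt}\right) z - (x, y)\,\frac{dz}{dt}}{z^2} \]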
Our self-driving car uses an AI model to predict depth. Traditional AI models can only predict normalized depth because they cannot determine distances from a single image. However, we hypothesized that through optical flow, the nearest and farthest values within the range could be estimated, allowing us to denormalize the AI depth map to produce an actual depth map. Let's denote the farthest value as \( m_a \) and the nearest value as \( m_i \), and our normalized depth map as \( \alpha \). Denormalizing \( \alpha \) gives us \[ z = (\alpha + m_i) \cdot (m_a - m_i) \] Substituting this into \( S_p \) yields \[ S_p = \cot\left(\frac{\theta}{2}\right) \cdot \frac{(x, y)}{(\alpha + m_i) \cdot (m_a - m_i)} \]
Solving for \( \frac{dS_p}{dt} \) provides optical flow in terms of our normalized (AI-generated) depth map, which should match the optical flow derived from comparing the movement of projected points. By differentiating \( \frac{dS_p}{dt} \) with respect to \( m_a \) and \( m_i \), we can use gradient descent to find \( m_a \) and \( m_i \), assuming that the \( S_p \) defined from depth equals the \( S_p \) defined by comparing the movement of projected points.
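Here's a toy sketch of that fitting idea in Python. It is not our actual pipeline: instead of matching full optical-flow fields, it fits \( m_a \) and \( m_i \) directly to a few reference depths using the denormalization above, with hand-derived gradients. All the names and sample values are made up for illustration.

```python
# Hypothetical "true" far/near values and a few normalized AI depth samples.
TRUE_MA, TRUE_MI = 10.0, 2.0
alphas = [0.0, 0.25, 0.5, 0.75, 1.0]
# "Observed" depths, generated from the denormalization z = (a + m_i)(m_a - m_i).
depths = [(a + TRUE_MI) * (TRUE_MA - TRUE_MI) for a in alphas]

m_a, m_i = 5.0, 0.5   # rough initial guesses
lr = 1e-3
for _ in range(100_000):
    grad_ma = grad_mi = 0.0
    for a, z in zip(alphas, depths):
        r = (a + m_i) * (m_a - m_i) - z           # residual of the denormalization
        grad_ma += 2 * r * (a + m_i)              # d/d m_a of (a + m_i)(m_a - m_i)
        grad_mi += 2 * r * (m_a - m_i - a - m_i)  # d/d m_i of (a + m_i)(m_a - m_i)
    m_a -= lr * grad_ma
    m_i -= lr * grad_mi

print(m_a, m_i)  # gradient descent recovers values near 10 and 2
```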
Our simulations of this approach found that not only can \( m_a \) and \( m_i \) be determined, but they can also be estimated accurately even with substantial noise present in the reference optical flow image used for gradient descent.
This was an app I made for my AP CSP class back in 11th grade. It chooses which political candidate is best for you by asking a few questions. Unfortunately, I only got a B- on the project, and it was worth like 200 points :(. I think my teacher was having a bad day. Try to get Bernie Sanders or Hillary Clinton; those candidates are difficult to get! Please post on the AwesomeScrewyLou Forums if you do! Also, btw, the section where you can donate to a candidate was designed as a joke. You won't actually be able to donate to a candidate, and no credit card numbers are collected.
I wanted to explore the geometry of transforming 3D space, but instead of the linear transformations that linear algebra investigates, I wanted to take a more non-linear approach. Here's an explanation of the mathematical concept this visualizer was built on.
Imagine you have a set of points \( P \in \mathbb{C} \) and a transformation function \( f(z) \). What will the set \( f(P) \) look like for all points? A good way to visualize this transformation is by examining the density of the distribution.
A derivative is defined as the limit of the average slope between two points as the distance between the points approaches zero, or formulaically as: \[ \lim_{a \to 0} \frac{f(x + a) - f(x)}{a} \]
Density is described as mass/volume or, in our case, points/area. Imagine a grid with one point placed at every single unit in both the X and Y directions. It makes sense to say the density is one point per unit squared because we have one point placed per every square unit. If you subdivided this grid and placed two points per unit in both the X and Y directions, the density would quadruple. For every unit squared, you now have four points.
Extending this idea to the set of all complex numbers, what would density look like? You suddenly have infinite points per unit squared, which might seem to break the concept. But let's make an assumption: for the set of all complex numbers with no transformations, let's assume the density is one. If we perform the transformation \( f(z) = 2z \), all the points end up twice as far from each other. Measuring density along any single direction (points per unit length), doubling the spacing halves the point count, so we've essentially halved our density, resulting in a density of \( \frac{1}{2} \). (Measured strictly as points per unit area it would drop to \( \frac{1}{4} \); from here on, density means this linear, spacing-based version, which is what the derivative formula derived below computes.)
Now, let's extend this concept to a nonlinear function, starting with the set of all real numbers.
If we have \( P \in \mathbb{R} \) and transform each point using \( f(z) = z^3 \), what will our density look like? In 1D, density changes from points per unit squared to points per unit. So, we need to figure out the distance between each point.
We can reuse the definition of the derivative for this!
Imagine we want to find the density at \( x \). Inverting \( x \) through \( f^{-1}(z) \), we find that \( x \) would have been at \( f^{-1}(x) \) before the transformation; let’s call that \( x' \). Putting \( x' \) back into \( f(z) \) gives us \( f(f^{-1}(x)) \) or just \( x \). Since we’re solving for the density at \( x \), this makes sense.
Now, let’s imagine \( x \)’s neighbor. It would be at \( f(f^{-1}(x) + a) \), where \( a \) is our grid size. We can find the distance between \( x \)’s neighbor and \( x \) as \( f(f^{-1}(x) + a) - f(f^{-1}(x)) \). Now we have our distance. We just need to know the number of points, which is our grid size. So our density is \[ \frac{a}{f(f^{-1}(x) + a) - f(f^{-1}(x))} \] As we make the number of points infinite, the grid size approaches zero. Thus, the density at any point \( x \) is: \[ \lim_{a \to 0} \frac{a}{f(f^{-1}(x) + a) - f(f^{-1}(x))} \]
This limit equals \( \frac{1}{f'(f^{-1}(x))} \): the reciprocal of the derivative of \( f \) evaluated at \( f^{-1}(x) \), or equivalently the derivative of the inverse function, \( (f^{-1})'(x) \).
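As a quick numerical sanity check (a throwaway Python sketch, not part of the visualizer), the finite-difference version of that limit for \( f(z) = z^3 \) matches \( \frac{1}{f'(f^{-1}(x))} \):

```python
# Check the density formula for f(z) = z**3 on the positive reals.
f = lambda z: z ** 3
f_inv = lambda x: x ** (1 / 3)       # real cube root, valid for x > 0
f_prime = lambda z: 3 * z ** 2

def density(x, a=1e-6):
    """a / (f(f_inv(x) + a) - f(f_inv(x))) with a small grid size a."""
    u = f_inv(x)
    return a / (f(u + a) - f(u))

x = 8.0
numeric = density(x)                  # finite-difference version of the limit
analytic = 1 / f_prime(f_inv(x))      # closed form: 1 / (3 * x**(2/3))
print(numeric, analytic)              # both ~0.08333 at x = 8
```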
For complex numbers, this concept holds true, but we use the length, or absolute value, of that (complex-valued) derivative, since the spacing between neighboring points is a complex number. Since we're taking the inverse of a function, this is technically only valid for functions with an inverse.
This extends well to quaternions, which are what the simulator works with. Quaternions provide four dimensions, with the real part as time and the imaginary components as spatial dimensions. The simulator showcases the transformation \( f(z) = \frac{1}{z} \). This transformation produces a singularity as time ticks forward in the \( \frac{1}{z} \) domain: points appear to split before eventually crashing into each other and annihilating.
\( P \) represents density, and \( \Delta t \) represents the change in time in the simulator relative to the global, non-transformed quaternion domain. Sound is generated by taking the change in density and converting it to a sine wave, which makes sense since physical sound is itself a change in (air) density.
I wanted to build a GUI for constructing neural networks as an alternative to coding them manually each time in PyTorch. I stopped this project after realizing it would be a lot quicker to just write the PyTorch code whenever I wanted a new model, which made working on this project really boring.
This was a project I started during COVID. I didn't really like Zoom, so I wanted to make a 3D version that would be better in every single way. I stopped the project after I realized I was basically just making the Metaverse; then I got bored, plus it looked kinda ugly. I'll leave the Metaverse to Mark Zuckerberg.
I came across a video on how Korean is an almost perfectly designed writing system, so naturally I wanted to see if I could train an AI to make one that's even better. I built this algorithm from scratch in JavaScript and created a loss function based on line clarity. I forget exactly how I implemented the algorithm, but it looks like it's good at learning to draw pristine lines and then struggles to effectively put those lines together to form letters. Might come back to it in the future, idk.
Around 2020, I became interested in neural networks and decided to build my own. In this project, I wrote a DNN from scratch to train on the MNIST dataset. As the model trained, its loss was plotted on a graph. As expected, the loss decreased roughly exponentially before converging close to zero. This project was a success!
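The original MNIST code isn't shown here, but a minimal from-scratch network in the same spirit looks something like this. To keep the sketch self-contained it trains a tiny 2-8-1 net on XOR instead of MNIST; all of it is an illustrative reconstruction, not the project's code.

```python
import math, random

random.seed(0)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# XOR: the classic toy problem a single-layer net can't solve.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 1, 1, 0]

H = 8                                   # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
lr = 0.5

first_loss = None
for epoch in range(10_000):
    loss = 0.0
    for (x1, x2), y in zip(X, Y):
        # forward pass
        h = [sigmoid(w1[j][0] * x1 + w1[j][1] * x2 + b1[j]) for j in range(H)]
        out = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
        loss += (out - y) ** 2
        # backward pass (squared-error loss, sigmoid activations)
        d_out = 2 * (out - y) * out * (1 - out)
        for j in range(H):
            d_h = d_out * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * d_out * h[j]
            w1[j][0] -= lr * d_h * x1
            w1[j][1] -= lr * d_h * x2
            b1[j] -= lr * d_h
        b2 -= lr * d_out
    if first_loss is None:
        first_loss = loss

print(first_loss, loss)  # loss shrinks as training progresses
```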
This was a project where I planned to model the entire solar system in OpenGL using Java. Unfortunately, the project was never finished, but some really beautiful shaders ended up in the final build. Not all of them are in the demo, but they're available somewhere in the source code. Feel free to download the executable; just note that if you want to see the source code, you'll need to decompile it yourself because I lost my own copy :(
A remake of my previous FPS, coded from scratch using WebGL and with better graphics.
This was a project I started because my school blocked pretty much every gaming website they could get their hands on. In response, I made my own FPS using ThreeJS so my friends and I could play during class. It got a decent amount of attention around my school, but unfortunately my server at the time was pretty unreliable, so the site crashed multiple times a day. It was only online for a couple of hours a day, so I ended up losing most of my traffic to krunker.io. Still a fun project, though. I don't really feel like maintaining a WebSocket server for an old project, so I'm only hosting the single-player version at the moment.
Controls:
This was an attempt to recreate Super Mario Bros 3 from scratch in HTML5. I wanted to recreate the game as closely as I could so the mechanics felt exactly the same. I feel like I accomplished that to a decent degree before the project kinda got dull.
The controls for the game are:
When I was younger, I used to make virtual cards to celebrate my family members' birthdays and special holidays. It got to the point where I was about to turn this gift-giving pursuit into an actual business with the help of one of my friends. The name: Screwy Lou Digital. It was meant to revolutionize the digital world, offering virtual cards with unparalleled creativity and customizability. Unfortunately, it never became a reality, but here are some cards I made for my grandpa and my dad.
This is a server management tool I developed in middle school to manage the files on my website. It was essentially a browser-based operating system, similar to OS.JS, which I was unknowingly competing against. Looking back, Admin OS was definitely lacking some very important core functionality. One time, a friend was managing my website and accidentally deleted an important folder, only to be greeted by the popup: "Sorry, the trash can is just for decoration." I couldn't really blame him for that mistake; there wasn't even a confirmation popup. I've gotten a lot better at programming since then.
This was a proxy I designed disguised as a note-taking website. I created it back in middle school to bypass our strict school restrictions on websites we could visit. The notepad windows act as a password portal. All you need to do is enter the correct keystrokes.
This was the first social media or networking site I made. I built the backend using PHP and MySQL. Reflecting on it now, I realize it definitely lacked certain security measures and was vulnerable to SQL injection. Thankfully, the website didn't get hacked while it was online in 2017, but I upgraded the code to prepared statements before uploading it here; I'm not trying to get myself hacked lol. This forum site allowed anyone to make posts and comments but lacked features like like/dislike buttons, account profiles, and DMs. It also lacked anti-spam functionality like CAPTCHAs. I remember being so excited to see people posting on my website for the first time, only to learn that they were all bots :(
This was the first website I ever created, built as a supplement to my YouTube channel when I was just 10. The name "AwesomeScrewyLou" was inspired by my cat Louie, who also served as the design for the logo. I updated the site until around 2019-2020, when I took it down. This project marked my introduction to HTML, CSS, and JavaScript, as well as programming in general.