mirror of https://gitlab.com/Anson-Projects/projects.git synced 2025-06-15 14:36:47 +00:00

Add alt text and meta descriptions to every document

This commit is contained in:
Anson 2024-11-24 20:16:35 +00:00
parent a5d831884a
commit f2c52e4b4b
8 changed files with 38 additions and 28 deletions


@@ -1,8 +1,7 @@
---
title: "How to Export Fusion 360 files to Other File Types"
description: |
How to export Fusion 360 Files and detailed explanations of what each exported file type does.
Export your Fusion 360 designs with ease. Learn how to access hidden export options and convert your models to various file types, including STL, OBJ, STEP, and more. Discover which format is best for 3D printing, CAD compatibility, and other uses.
date: 2018-09-12
date-modified: 2024-02-29
categories:
@@ -24,7 +23,7 @@ Clicking the download button provides you with a ton more options than the deskt
Below I have a list of every file-type that can be exported and a quick summary of what they are good for.
![List of Export Options](websiteExport.png)
![List of Export Options](websiteExport.png){fig-alt="A screenshot of the export options in Fusion 360. The options are Fusion 360 Archive, Inventor 2016, IGES, SAT, SMT, STEP, DWG, DXF, STL, FBX, SketchUp, and OBJ."}
## Fusion 360 Export Formats


@@ -1,8 +1,9 @@
---
title: "Air Propulsion Simulation"
description: |
Simulating the performance of an air propulsion system as an alternative to solid rocket motors.
Simulating the performance of a compressed air propulsion system as an alternative to solid rocket motors using Julia.
description-meta: |
Simulate air propulsion for lunar mining transport! This project explores using compressed air as an alternative to solid rocket motors. See the Julia simulation results and comparisons with traditional rocket motor performance. Explore the code and learn more about this innovative approach.
repository_url: https://gitlab.com/lander-team/air-prop-simulation
date: 2021-04-01
date-modified: 2024-03-10


@@ -1,8 +1,7 @@
---
title: "ISS Eclipse Determination"
description: |
Determining how much sunlight a body orbiting a planet is receiving.
Calculate sunlight exposure for orbiting spacecraft like the ISS. This Julia project demonstrates how to determine eclipse times, considering umbra, penumbra, and full sunlight. Visualizations and code included. Learn about orbital mechanics and mission design considerations.
date: 2021-05-01
date-modified: 2024-02-29
categories:
@@ -27,11 +26,11 @@ Determining the eclipses a satellite will encounter is a major driving factor wh
## What is an Eclipse
![Geometry of an Eclipse](geometry.svg)
![Geometry of an Eclipse](geometry.svg){fig-alt="A diagram of an eclipse. The sun is shown as a large yellow circle on the left. A smaller blue circle labeled 'Body' is to the right of the sun. A spacecraft is shown above the body. The umbra and penumbra are labeled."}
The above image is a simple representation of what an eclipse is. First, you'll notice the Umbra is complete darkness, then the Penumbra, which is a shadow of varying darkness, and then the rest of the orbit is in full sunlight. For this example, I will be using the ISS, which has a very low orbit, so the Penumbra isn't much of a problem. However, you can tell by looking at the diagram that higher altitude orbits would spend more time in the Penumbra.
![Body Radii and Position Vectors](vectors_radiuss.svg)
![Body Radii and Position Vectors](vectors_radiuss.svg){fig-alt="A diagram expanding on the last figure, but with distances marked for the radii of the sun and body, and the distance between the body and spacecraft."}
Here is a more detailed view of the eclipse that will make it easier to explain what is going on. There are two position vectors and two radii that need to be known for simple eclipse determination. In more advanced cases, the atmosphere of the body you're orbiting can significantly affect the Umbra and Penumbra, and other bodies could also potentially block the Sun. However, we will keep it simple for this example since those effects are minimal for the ISS's orbit. <code style="color:#0b7285">Rsun</code> and <code style="color:#c92a2a">Rbody</code> are the radii of the Sun and the body (in this case Earth), respectively. <code style="color:#5f3dc4">r_sun_body</code> is a vector from the center of the Sun to the center of the target body. For this example I will only be using one vector, but for more rigorous eclipse determination it is important to calculate the ephemeris at least once a day, since it changes significantly over the course of a year. The reason I am ignoring it for now is that there is currently no good way to calculate [Ephemerides](https://ssd.jpl.nasa.gov/?ephemerides) in Julia, but a package is being worked on, so I may revisit this and do a more rigorous analysis in the future. <code style="color:#5c940d">r_body_sc</code> is a position vector from the center of the body being orbited to the center of our spacecraft.
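The geometry above maps directly to code. The post implements this in Julia; as an illustration, here is a minimal Python/NumPy sketch of the same apparent-radius approach (the function name, the circle-overlap penumbra model, and the default mean radii are my own choices, and the annular case is ignored since, as noted, it barely matters for the ISS):

```python
import numpy as np

def sunlight(r_sun_body, r_body_sc, R_sun=6.957e8, R_body=6.371e6):
    """Fraction of the solar disc visible from the spacecraft (0 = umbra, 1 = full sun).

    r_sun_body: vector from the Sun's center to the body's center (m)
    r_body_sc:  vector from the body's center to the spacecraft (m)
    """
    r_sun_sc = r_sun_body + r_body_sc  # Sun -> spacecraft
    # Apparent angular radii of the Sun and the body, as seen from the spacecraft
    a = np.arcsin(R_sun / np.linalg.norm(r_sun_sc))
    b = np.arcsin(R_body / np.linalg.norm(r_body_sc))
    # Angular separation between the two centers, seen from the spacecraft
    cos_c = np.dot(-r_sun_sc, -r_body_sc) / (
        np.linalg.norm(r_sun_sc) * np.linalg.norm(r_body_sc))
    c = np.arccos(np.clip(cos_c, -1.0, 1.0))
    if c >= a + b:
        return 1.0   # discs don't overlap: full sunlight
    if c <= b - a:
        return 0.0   # Sun fully behind the body: umbra
    # Penumbra: subtract the lens-shaped overlap area of the two discs
    x = (c**2 + a**2 - b**2) / (2 * c)
    y = np.sqrt(a**2 - x**2)
    overlap = (a**2 * np.arccos(x / a)
               + b**2 * np.arccos((c - x) / b)
               - c * y)
    return 1.0 - overlap / (np.pi * a**2)
```

For an ISS-like altitude, a point directly behind Earth from the Sun gives 0.0 (umbra), and a point on the sunward side gives 1.0.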
@@ -112,6 +111,8 @@ The `sunlight` function returns values from 0 to 1, 0 being complete darkness, 1
```{julia}
#| code-fold: true
#| fig-cap: ISS Sunlight
#| fig-alt: A graph titled "ISS Sunlight Over a Day" showing the percentage of sunlight the ISS receives over a 24-hour period. The x-axis represents time in hours, and the y-axis represents sunlight percentage. The graph shows a period of near-total sunlight followed by a period of darkness, and the cycle repeats.
# Get fancy with the line color.
light_range = range(colorant"black", stop=colorant"orange", length=101);
light_colors = [light_range[round(Int, 1 + s * 100)] for s in S];


@@ -1,8 +1,7 @@
---
title: "Continuous Integration and Systems Engineering"
description: |
CI is helpful for more than running tests and pushing code to production. My team is currently using it to build continuous reports of our code to keep everyone updated with its status.
Streamline systems engineering projects with Continuous Integration (CI). This post explains how CI can generate automatic reports, keeping teams informed on complex projects. See a real-world example of how a student capstone team used CI to bridge the gap between software and systems engineering, improving communication and progress tracking. Includes a link to the project repository.
date: 2021-10-04
date-modified: 2024-02-29
categories:


@@ -2,7 +2,8 @@
title: "Notes on Nano"
description: |
Nano (Ӿ) is a fast, feeless and severely underrated currency.
description-meta: |
Explore Nano (XNO), a fast and feeless cryptocurrency. This post examines Nano's unique block-lattice architecture, its potential as a true digital currency, and compares it to other cryptocurrencies like Bitcoin, Ethereum, and Ripple. Learn about Nano's advantages, challenges, and its potential for mainstream adoption. Includes data visualizations and links to further resources.
date: 2021-12-08
date-modified: 2024-02-29
categories:
@@ -28,19 +29,21 @@ The coin I want to talk about is Nano (XNO). As of writing, it is ranked #175 by
<!-- ![Data from [@kraken_deposit] and [@coingecko]](caps.svg) -->
:::{.column-body-outset}
<figure>
<embed
type="text/html"
src="caps.html"
width="100%"
style="height: 60vh"
/>
<figcaption>Data from [@kraken_deposit] and [@coingecko]</figcaption>
</figure>
:::
<figcaption>Data from [@kraken_deposit] and [@coingecko]</figcaption></br>
Transactions must happen in seconds for a currency to be usable. Above are all the coins listed on Kraken, along with their confirmation time. Confirmation time is the amount of time Kraken recommends waiting before a transaction can be trusted not to be reversed on the blockchain. Kraken's times are conservative, but I believe them to be a trusted authority on the topic. It's important to note that most of the coins on this list are not designed with buying coffee in mind, but I think the comparison is still relevant since it is commonly believed that the only thing holding Bitcoin and its derivatives back from being used everywhere is hitting a critical mass of adoption among retail. Bitcoin is making progress, but Bitcoin today is far from being used as a currency.
![Data from [@coingecko]](fast.svg)
![Data from [@coingecko]](fast.svg){fig-alt="A bar chart titled 'Market Caps of Fast Coins' showing the market capitalization of various cryptocurrencies. The x-axis lists ripple, nano, serum, raydium, oxygen, kava, stellar, icon, cosmos, and solana. The y-axis represents market cap in billions of USD. Each bar is colored according to the cryptocurrency's category: Currency, DeFi, Parachain, and Smart Contract. Solana has the largest market cap, followed by ripple."}
Above are all of the coins capable of settling in less than 1 second, which is about on par with a credit card. I also believe that confirming in less than a second is a benchmark that must be hit to be useful for everyday transactions. Solana stands out the most since it is the only smart contract platform on the list and has the highest market cap. Solana has a ton of potential, but I don't believe that a coin with a smart-contract platform can compete long-term with a coin specifically designed to be a currency. The DeFi and Parachain coins are more governance tokens than actual cryptocurrencies, so they aren't designed with the scale required for use as a currency. That leaves Ripple and Nano, which have wildly different market caps. Ripple is quite controversial since it is centralized to the point that I would argue it's closer to Visa than Bitcoin^[@Jain_2021]. I believe a fraudulent organization runs Ripple, so I don't want to justify it by talking about it too much here. However, it would be biased not to compare the performance and tokenomics since they are the only two serious projects with the goal to become a currency.


@@ -80,12 +80,12 @@ The materials and construction methods used to create satellites are changing ra
Below in @fig-comparison, you can see an example of the current decision tree method used by DebriSat alongside the advanced 3D scan data science pipeline that we propose. Our method utilizes complex analysis only possible with a high-resolution scan of the models and then uses a variety of machine learning and data science techniques to process the data into useful metrics. Our method is a modern approach that can eventually be developed into complex simulations of debris.
![DebriSat Approach versus Our Approach](uripropcomparison.svg){#fig-comparison}
![DebriSat Approach versus Our Approach](uripropcomparison.svg){#fig-comparison fig-alt="A flowchart comparing two approaches to satellite debris analysis. The traditional approach begins with inspecting the object and determining if it is flexible. If yes, it is labeled FLEXIBLE. If not, it is checked whether one axis is significantly longer than the others. If yes, it is labeled Bent Plate. If not, the process continues to other classifications. Our approach imports scanned geometry into MATLAB, derives data such as moment of inertia, center of mass, aerodynamic drag, density, and material from the 3D scans, and processes the data in a machine learning pipeline to determine impact lethality, accurate orbit propagation, and predictions for future impacts."}
Enough samples to get the project started have been provided by Dr. Madler, but as of now, they are entirely uncharacterized. The first step towards characterizing the debris we have is to manually organize them into different clusters. The clusters are based on similar characteristics that can be observed visually to produce a preliminary characterization and are just meant to be a starting point for the MATLAB code. Then three to five samples from each cluster will be scanned to give a somewhat even distribution of what we expect MATLAB to provide for each cluster. When clustering using machine learning methods, every cluster must have a few pieces to ensure minimal outliers in the data. As more data becomes available, the machine learning methods get more powerful. Before being put into MATLAB, every scan will be uploaded into CATIA to take data from the scans and clean up the model. CATIA makes some of the desired characteristics of the debris samples, such as the moment of inertia, the center of gravity, etc., very easy to collect. Future iterations of this project will likely do all processing in MATLAB to reduce the manual labor required for each piece of debris.
Below in @fig-debris is a render of a real piece of debris scanned by the Rapid Prototyping Lab on campus. Even after reducing the number of points provided by the scanner, the final model has over 1.2 million points, which is an impressive resolution given that the model is only a few inches long on its longest axis. With debris created by hypervelocity impacts having such complex shapes, it becomes clear almost immediately that the geometry is far too complex for any meaningful characterization by a human without machine learning techniques. This issue is compounded by the fact that satellites comprise many exotic materials. The DebriSat program uses a simplified satellite to reduce costs, and it still comprises 14 different categories of materials, where a category is primarily a way to determine how dense the material is [@cowardin2019updates], not a label for each unique material. This also means that the shapes vary wildly, since PCBs, wires, batteries, and the aluminum structure react entirely differently to a hypervelocity collision. The example in @fig-debris, and every piece of debris we have at our disposal, comes from a hypervelocity impact involving aluminum sheet metal. A dataset of one material type is beneficial at this point: since our dataset is still small, it makes sense to start our characterization with a single type of debris.
![Sample Debris Scans](long.png){#fig-debris}
![Sample Debris Scans](long.png){#fig-debris fig-alt="A 3D render of a thin piece of metal with many bends."}
Our data collection process gives us much more data than the traditional methods, so machine learning is required to make sense of it. The first step once the data has been tabulated into MATLAB is to perform a principal component analysis (PCA). PCA has two significant benefits at this stage of the project: it reduces the required size of our dataset, and it decreases the computational power needed to process it. Reducing the dataset's dimensionality will allow us to derive which aspects of the orbital debris are truly important for classification. This may be easy for a human to discern at this stage of the project, but the DebriSat database has almost 200,000 pieces of debris cataloged [@carrasquilla_debrisat_2019], so it is essential to start with an approach that adapts to big data and is robust enough to handle the very complex metrics we are trying to classify. Once PCA has reduced the dataset, it can be clustered using the k-means method. K-means categorizes large, complex datasets using pattern recognition. Depending on which insight we are looking for, k-means could produce a valuable result on its own, or serve as a step toward much more advanced machine learning analyses.
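As a sketch of that first processing step, a PCA reduction can be written directly from the covariance eigendecomposition (the project works in MATLAB; this is an illustrative Python/NumPy stand-in, and `var_target` is a hypothetical knob for how much variance to retain):

```python
import numpy as np

def pca_reduce(X, var_target=0.95):
    """Project feature rows X onto the fewest principal components
    that explain at least `var_target` of the total variance."""
    Xc = X - X.mean(axis=0)                       # center each feature column
    lam, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(lam)[::-1]                 # largest variance first
    lam, vecs = lam[order], vecs[:, order]
    explained = np.cumsum(lam) / lam.sum()        # cumulative variance ratio
    k = int(np.searchsorted(explained, var_target)) + 1
    return Xc @ vecs[:, :k], explained
```

A reduced matrix like this is what would then be handed to a clustering step such as k-means.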


@@ -2,6 +2,8 @@
title: "Machine Learning Directed Study: Report 1"
description: |
Orbital Debris Characterization using K-means clustering.
description-meta: |
Characterizing orbital debris with machine learning. This project uses k-means clustering in MATLAB to analyze 3D models of satellite parts, simulating debris analysis. Explore data preparation, visualization, and initial clustering results. Learn about the challenges of orbital debris characterization and future research directions.
repository_url: https://gitlab.com/orbital-debris-research/directed-study/report-1
date: 2022-02-14
date-modified: 2024-02-29
@@ -21,7 +23,7 @@ freeze: auto
To get started on the project before any scans of the actual debris are made available, I opted to find similar 3D models online and process them as if they were data collected by my team. GrabCAD is an excellent source of high-quality 3D models, and all of the models have at worst a non-commercial license, making them suitable for this study. To start, I downloaded a high-quality model of a 6U CubeSat, which coincidentally enough was designed to detect orbital debris. This model consists of 48 individual parts, most of which are unique.
![CubeSat Used for Analysis](assembly.jpg)
![CubeSat Used for Analysis](assembly.jpg){fig-alt="A 3D model of a satellite with deployed solar panels. The satellite bus is a rectangular structure with exposed framework. The solar panels are arranged in a cross-like configuration, with blue and gold panels. The model is shown against a light gray background with a reflective surface."}
## Data Preparation
@@ -42,15 +44,17 @@ Physical
The entire file of the compiled part properties from Fusion 360 can be seen [here.](https://gitlab.com/orbital-debris-research/directed-study/report-1/-/blob/main/compiled.csv) This method gave 22 columns of data, but most of the columns are unsuitable for characterizing 3D geometry. The only properties considered must be scalars independent of a model's position and orientation in space. Part of the data provided was a moment of inertia tensor. The tensor was processed down to $I_x$, $I_y$, and $I_z$, which were then used to calculate an $\bar{I}$. Then bounding box length, width, and height were used to compute the total volume that the object takes up. In the end, the only properties used in the analysis of the parts were: mass, volume, density, area, bounding box volume, $\bar{I}$, and material. Some parts also had to be removed, so the final dataset is 44 rows and 7 columns. Below is a _Splom_ plot, which is a great way to visualize high-dimensional data. As you can see, most of the properties correlate with one another.
:::{.column-body-outset}
<figure>
<embed
type="text/html"
src="prepped.html"
width="100%"
style="height: 40vh"
style="min-height: 50vh"
/>
<figcaption>[View Plot as an image](prepped.svg)</figcaption>
</figure>
:::
<figcaption>[View Plot as an image](prepped.svg)</figcaption></br>
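The reduction of the inertia tensor to scalars described above can be sketched as follows (illustrative Python; using an eigendecomposition to get principal moments is my reading of "processed down to $I_x$, $I_y$, $I_z$"):

```python
import numpy as np

def inertia_scalars(tensor):
    """Reduce a 3x3 inertia tensor to its principal moments and their mean, Ibar."""
    Ix, Iy, Iz = np.linalg.eigvalsh(tensor)  # symmetric tensor -> real eigenvalues
    return (Ix, Iy, Iz), (Ix + Iy + Iz) / 3

def bbox_volume(length, width, height):
    """Total volume the part occupies, from its bounding box."""
    return length * width * height
```

Together these give the orientation-independent scalars (principal moments, $\bar{I}$, bounding-box volume) that go into the feature table.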
Now that the data is processed and clean, characterization in Matlab can begin. The original idea was to perform _PCA_, but the method had difficulties producing meaningful results. This is likely because the current dataset is tiny for machine learning and the variation in the data is high. The application of _PCA_ will be revisited once the dataset grows. The first step for characterization is importing our data into Matlab.
@@ -74,20 +78,22 @@ histcounts(idx) =
Then plotting Volume vs. Mass using our clusters produces the following plot. These make intuitive sense, but it is clear that the dataset needs much more data for <strong style="color:#00ff00;">Cluster 3</strong>.
![Volume and Mass clusters](clusters.svg)
![Volume and Mass clusters](clusters.svg){fig-alt="A scatter plot of volume versus mass for the satellite parts, with points colored by their k-means cluster assignment."}
Below is another _Splom_, but with the clusters found above. Since the _k-means_ only used Mass and Volume to develop its clusters, some of the properties do not cluster well against each other. This is also a powerful cursory glance at what properties are correlated.
:::{.column-body-outset}
<figure>
<embed
type="text/html"
src="prepped_clustered.html"
width="100%"
style="height: 40vh"
style="min-height: 50vh"
/>
<figcaption>[View Plot as an image](prepped_clustered.svg)</figcaption>
</figure>
:::
<figcaption>[View Plot as an image](prepped_clustered.svg)</figcaption></br>
## Next Steps
@@ -95,4 +101,4 @@ The current dataset needs to be grown in both the amount of data and the variety
Once the dataset is grown, more advanced analysis can begin. PCA is the current goal and can hopefully be applied by the next report.
Check out the repo for this report for all the code and raw data. https://gitlab.com/orbital-debris-research/directed-study/report-1
Check out the repo for this report for all the code and raw data. [https://gitlab.com/orbital-debris-research/directed-study/report-1](https://gitlab.com/orbital-debris-research/directed-study/report-1)


@@ -2,6 +2,7 @@
title: "Machine Learning Directed Study: Report 2"
description: |
Advanced processing of 3D meshes using Julia, and data science in Matlab.
description-meta: This report details advanced 3D mesh processing for orbital debris characterization using Julia and MATLAB. It covers data gathering from online 3D models, data preparation using a custom Julia library for efficient property extraction, and characterization using Principal Component Analysis (PCA) and k-means clustering. The report also includes visualizations and discusses next steps for dataset expansion and property derivation.
repository_url: https://gitlab.com/orbital-debris-research/directed-study/report-2
date: 2022-04-03
date-modified: 2024-02-29
@@ -25,7 +26,7 @@ excellent source of high-quality 3D models, and all the models have, at worst, a
license making them suitable for this study. The current dataset uses three separate satellite
assemblies found on GrabCAD; below is an example of one of the satellites that was used.
![Example CubeSat Used for Analysis, @interfluo6UCubeSatModel](Figures/assembly.jpg)
![Example CubeSat Used for Analysis, @interfluo6UCubeSatModel](Figures/assembly.jpg){fig-alt="A 3D model of a satellite with deployed solar panels. The satellite bus is a rectangular structure with exposed framework. The solar panels are arranged in a cross-like configuration, with blue and gold panels. The model is shown against a light gray background with a reflective surface."}
## Data Preparation
@@ -47,7 +48,7 @@ project. The characteristic length takes the maximum orthogonal dimension of a b
dimensions then divides by 3 to produce a single scalar value that can be used to get an idea of
the size of a 3D object.
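As a sketch, one common way to compute such a characteristic length (assuming it is the mean of the bounding-box extents, which matches the "divide by 3" description; the function name is my own) is:

```python
import numpy as np

def characteristic_length(vertices):
    """Average of the axis-aligned bounding-box extents along the three axes."""
    extents = vertices.max(axis=0) - vertices.min(axis=0)  # per-axis spans
    return float(extents.sum() / 3)
```

This assumes the mesh has already been rotated so that its bounding box is axis-aligned.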
![Current mesh processing pipeline](Figures/current_process.svg)
![Current mesh processing pipeline](Figures/current_process.svg){fig-alt="A diagram illustrating the current mesh processing pipeline. The pipeline begins with a satellite image and converts it into a mesh. The mesh is brought into code represented by summation symbols over x, y, and z. Finally, properties labeled Ix, Iy, and Iz are extracted from the mesh."}
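The summation stage in the pipeline figure can be illustrated by treating mesh vertices as point masses (a deliberate simplification of the actual mesh integration the Julia library performs; `point_mass_inertia` and the per-vertex `masses` weighting are hypothetical):

```python
import numpy as np

def point_mass_inertia(vertices, masses):
    """Moments of inertia about the x, y, z axes for a point-mass model."""
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    Ix = np.sum(masses * (y**2 + z**2))  # distance from the x axis, squared
    Iy = np.sum(masses * (x**2 + z**2))
    Iz = np.sum(masses * (x**2 + y**2))
    return Ix, Iy, Iz
```

A real implementation would integrate over the mesh volume rather than sum over vertices, which is where the speed concerns discussed below come from.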
The algorithm's speed is critical not only for the eventual large number of debris pieces that have
to be processed, but many of the data science algorithms we plan on performing on the compiled data
@@ -105,7 +106,7 @@ lambda_ratio = cumsum(lambda) ./ sum(lambda)
Then plotting `lambda_ratio`, which is the `cumsum`/`sum`, produces the following plot:
![PCA Plot](Figures/pca.png)
![PCA Plot](Figures/pca.png){fig-alt="A line graph depicting the cumulative variance of different components. The x-axis represents the components (Iz, Iy, Ix, body z, body y, body x, Lc, and Surface Area), and the y-axis represents the cumulative variance ranging from 0.988 to 1. The cumulative variance is the running sum of the eigenvalues divided by their total. The plot shows a rapid increase for the first few components, followed by a plateau."}
The current dataset can be described incredibly well just by looking at `Iz`; recall that the models
are rotated so that `Iz` is the largest moment of inertia. Then including `Iy` and `Ix` means that a
@@ -127,14 +128,14 @@ end
Which produces the following plot:
![Elbow method to determine the required number of clusters.](Figures/kmeans.png)
![Elbow method to determine the required number of clusters.](Figures/kmeans.png){fig-alt="A line graph illustrating the sum of distances to centroids for different numbers of clusters (K). The x-axis represents the number of clusters, ranging from 2 to 20, and the y-axis represents the sum of distances to the centroid. The plot shows a sharp decrease in the sum of distances as the number of clusters increases initially, followed by a gradual flattening of the curve, suggesting diminishing returns with increasing K."}
As can be seen in the above elbow plot, at 6 clusters there is an "elbow," which is where there is a
large drop in the sum of distances to the centroid of each cluster, meaning that 6 is the optimal
number of clusters. Plotting the inertias using 6 k-means clusters produces the
following plot:
![Moments of Inertia plotted with 6 clusters.](Figures/inertia3d.png)
![Moments of Inertia plotted with 6 clusters.](Figures/inertia3d.png){fig-alt="A 3D scatter plot showing clusters of data points. The plot's axes represent Inertia x, Inertia y, and Inertia z. Six distinct clusters are represented by different colors and labeled accordingly in a legend."}
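As an illustrative sketch of the sweep behind the elbow plot (the report uses MATLAB's `kmeans`; this is a minimal Python stand-in with an assumed fixed seed):

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's algorithm; returns labels and total within-cluster distance."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]  # random initial centers
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)                             # nearest centroid
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    inertia = np.sum((X - centroids[labels]) ** 2)
    return labels, inertia

# Elbow sweep: record inertia for each K and look for the bend, e.g.
# distances = [kmeans(X, k)[1] for k in range(2, 21)]
```

Plotting the recorded inertias against K reproduces the shape of the elbow curve above.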
From this plot it is immediately clear that there are clusters of outliers. These are due to the
different shapes: the extreme values are slender rods or flat plates, while the clusters closer to