mirror of https://gitlab.com/Anson-Projects/projects.git synced 2025-07-28 00:51:29 +00:00

Add alt text and meta descriptions to every document

This commit is contained in:
2024-11-24 20:16:35 +00:00
parent a5d831884a
commit f2c52e4b4b
8 changed files with 38 additions and 28 deletions


@@ -80,12 +80,12 @@ The materials and construction methods used to create satellites are changing ra
Below in @fig-comparison, you can see an example of the current decision tree method used by DebriSat alongside the advanced 3D scan data science pipeline that we propose. Our method applies analysis that is only possible with a high-resolution scan of the models, then uses a range of machine learning and data science techniques to distill the scan data into useful metrics. It is a modern approach that can eventually be developed into complex simulations of debris.
![DebriSat Approach versus Our Approach](uripropcomparison.svg){#fig-comparison}
![DebriSat Approach versus Our Approach](uripropcomparison.svg){#fig-comparison fig-alt="A flowchart comparing two approaches to debris satellite analysis. The Traditional Approach begins with inspecting the object and determining if it is flexible. If yes, it is labeled FLEXIBLE. If not, it is checked if one axis is significantly longer than others. If yes, it is labeled Bent Plate. If not, the process continues to other classifications. The Our Approach imports scanned geometry into MATLAB, derives data such as moment of inertia, center of mass, aerodynamic drag, density, and material from the 3D scans, and processes the data in a machine learning pipeline to determine impact lethality, accurate orbit propagation, and predictions for future impacts."}
Dr. Madler has provided enough samples to get the project started, but as of now they are entirely uncharacterized. The first step toward characterizing the debris is to organize it manually into clusters based on similarities that can be observed visually; this preliminary grouping is only meant as a starting point for the MATLAB code. Three to five samples from each cluster will then be scanned to give a reasonably even distribution of the data we expect MATLAB to produce for each cluster. When clustering with machine learning methods, every cluster must contain at least a few pieces to keep outliers in the data to a minimum, and the methods become more powerful as more data becomes available. Before being imported into MATLAB, every scan will be uploaded into CATIA to clean up the model and extract data from it. CATIA makes several of the desired characteristics of the debris samples, such as the moment of inertia and the center of gravity, very easy to collect. Future iterations of this project will likely do all processing in MATLAB to reduce the manual labor required for each piece of debris.
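To make the bulk properties mentioned above concrete, here is a minimal sketch of how a center of mass and inertia tensor can be estimated directly from scan geometry. This is an illustration only, not the project's workflow: the project extracts these values in CATIA from the full solid model, while the sketch below assumes the scan is a point cloud of equal unit point masses.

```python
import numpy as np

def mass_properties(points):
    """Estimate center of mass and inertia tensor of a scanned
    point cloud, treating every point as an equal unit mass
    (a simplifying assumption; CATIA derives these from the
    full solid geometry)."""
    points = np.asarray(points, dtype=float)
    com = points.mean(axis=0)          # center of mass
    r = points - com                   # positions relative to the COM
    # Inertia tensor: I = sum_i (|r_i|^2 * E3 - outer(r_i, r_i))
    r2 = (r ** 2).sum(axis=1)
    inertia = r2.sum() * np.eye(3) - r.T @ r
    return com, inertia

# Hypothetical example: the eight corner points of a unit cube
cube = np.array([[x, y, z] for x in (0, 1)
                            for y in (0, 1)
                            for z in (0, 1)], dtype=float)
com, inertia = mass_properties(cube)
```

For the cube, the estimated center of mass is (0.5, 0.5, 0.5) and the inertia tensor is diagonal by symmetry; a real scan with over a million points would be processed the same way, just with per-point mass weights derived from the material.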
Below in @fig-debris is a render of a real piece of debris scanned by the Rapid Prototyping Lab on campus. Even after reducing the number of points produced by the scanner, the final model contains over 1.2 million points, an impressive resolution given that the model is only a few inches long on its longest axis. With debris created by hypervelocity impacts having such complex shapes, it becomes clear almost immediately that the geometry is far too complex for any meaningful characterization by a human without machine learning techniques. This issue is compounded by the fact that satellites are built from many exotic materials. The DebriSat program uses a simplified satellite to reduce costs, and it still comprises 14 categories of materials, where a category primarily indicates how dense a material is [@cowardin2019updates] rather than identifying each unique material. The shapes also vary wildly, since PCBs, wires, batteries, and the aluminum structure react entirely differently to a hypervelocity collision. The example in @fig-debris, and every piece of debris at our disposal, comes from a hypervelocity impact involving aluminum sheet metal. A dataset of one material type is beneficial at this point: while our dataset is still small, it makes sense to start our characterization with a single type of debris.
![Sample Debris Scans](long.png){#fig-debris}
![Sample Debris Scans](long.png){#fig-debris fig-alt="A 3D render of a thin piece of metal with many bends."}
Our data collection process yields far more data than the traditional methods, so machine learning is required to make sense of it. The first step in processing the data, once it has been tabulated in MATLAB, is to perform a principal component analysis (PCA). PCA has two significant benefits at this stage of the project: it reduces the required size of our dataset and decreases the computational power needed to process it. Reducing the dataset's dimensionality will allow us to derive which aspects of the orbital debris are truly important for classification. This may be easy for a human to discern at the current scale, but the DebriSat database has almost 200,000 pieces of debris cataloged [@carrasquilla_debrisat_2019], so it is essential to start with an approach that adapts to big data and is robust enough to handle the very complex metrics we are trying to classify. Once PCA has reduced the dataset, it can be clustered using the k-means method, which partitions large, complex datasets through pattern recognition. Depending on which insight we are looking for, k-means could produce a valuable result on its own, or it could be a stepping stone to more advanced machine learning methods of analysis.
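The PCA-then-k-means pipeline described above can be sketched as follows. This is a minimal illustration using scikit-learn on synthetic feature vectors standing in for the tabulated debris metrics (mass properties, drag, density, and so on); the dimensionality, component count, and cluster count are assumptions for the demo, not values from the project.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-in for tabulated debris metrics: two loose
# groups of 50 samples each in a 10-dimensional feature space.
group_a = rng.normal(loc=0.0, scale=0.5, size=(50, 10))
group_b = rng.normal(loc=3.0, scale=0.5, size=(50, 10))
features = np.vstack([group_a, group_b])

# Step 1: PCA reduces dimensionality, keeping the components
# that explain most of the variance across the metrics.
pca = PCA(n_components=2)
reduced = pca.fit_transform(features)

# Step 2: k-means groups the reduced data into candidate clusters.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(reduced)
```

On this synthetic data, the two recovered clusters line up with the two generated groups; on real debris data, the cluster assignments would be compared against the manual visual clusters as a sanity check before moving to more advanced methods.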