MTRL_DATASET - Material Analysis Tools
Overview
This project contains a dataset from research on a data-informed digital twin for large-scale 3D printing. The data was collected through a series of experiments using two machines: (i) a Kuka KR50R2500 industrial robot and (ii) a MAI® MULTIMIX-3D mortar mixing pump. Additionally, measurements of the printed object's width were recorded during the experiments.
The dataset has been used to explore correlations between machine performance, material behavior, and the final printed structure. The accompanying codebase enables replication of the study’s results, offering tools for data processing, clustering, visualization, and the analysis of various material properties and printing parameters.
The experiments in this study were conducted in two phases:
- Correlation Analysis: The first phase focused on exploring the relationships between machine performance, material behavior, and the resulting printed object. Data from these experiments was then used to develop a clustering-based prediction model and a set of feedback control services to automate the operation of the Kuka robot and the mortar mixing pump.
- Evaluation of Feedback Control: In the second phase, the feedback control services were implemented in a new set of experiments to assess their effectiveness in optimizing the 3D printing process.
Project Structure
MTRL_DATASET/
│
├── Data/
│ ├── Block_Tests/
│ │ ├── BlockTasks.json - Task data for block (evaluation) tests
│ │ └── BlockMeasurements.json - Width measurements for different block types
│ │
│ └── Mixture_Experiments/
│ ├── PumpResponse_Clean.json - Clean pump response data for mixture experiments (used for clustering)
│ ├── PumpTasks.json - Task data for pump operations
│ └── WidthCorrelationTests.json - Data for width correlation tests
│
├── Clusters/ - Directory for saved ML models
│ ├── kmeans.pkl - Trained KMeans model
│ └── scaler.pkl - Fitted StandardScaler
│
├── Figs/ - Directory of all plots generated by the code
│
├── Blocks.py - Block data analysis and visualization
├── BlockMeasurements.py - Analysis of block measurement data
├── Clusters.py - Machine learning clustering of pump data
├── PumpData.py - Pump data analysis utilities
├── WidthCorrelations.py - Analysis of correlations between printed object width, Kuka robot velocity, and pump frequency
└── README.md - This file
Data Description
01_Mixture Experiments Data
The experiments in this section were conducted with four mixture types:

| Mix | Clay (kg) | Sand (kg) | Water (kg) | Flow Table (mm) | Density (g/L) |
|-----|-----------|-----------|------------|-----------------|---------------|
| M1  | 5.00      | 7.50      | 3.00       | 169             | 2080          |
| M2  | 5.00      | 7.50      | 2.75       | 163             | 2100          |
| M3  | 5.00      | 7.50      | 2.50       | 147             | 2140          |
| M4  | 5.00      | 7.50      | 2.25       | 127             | 2196          |

PumpResponse_Clean.json
Contains cleaned pump response data including:
- Pump output power and current measurements
- Mortar temperature readings
- Pump pressure readings
- Temporal data for pump operations
PumpTasks.json
Contains task information related to pump operations:
- Task details for mixture experiments
- Timing information for pump actions
- Operational parameters and settings
WidthCorrelationTests.json
Contains test data for analyzing correlations between:
- Width measurements
- Pump frequency
- Robot velocity
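As a minimal sketch of how these three quantities can be correlated, the snippet below computes pairwise Pearson coefficients with pandas. The column names (`width_mm`, `pump_frequency_hz`, `robot_velocity_ms`) and the inline sample values are illustrative assumptions; in practice the data would be loaded from `Data/Mixture_Experiments/WidthCorrelationTests.json`, whose exact schema is not documented here.

```python
import pandas as pd

# Illustrative sample only; column names are assumptions, not the
# documented schema of WidthCorrelationTests.json.
df = pd.DataFrame({
    "width_mm": [48.0, 50.0, 52.0, 54.0],
    "pump_frequency_hz": [15, 16, 17, 18],
    "robot_velocity_ms": [0.13, 0.12, 0.11, 0.10],
})

# Pairwise Pearson correlation matrix across the three variables.
corr = df.corr(method="pearson")
print(corr)
```

With linearly related sample data like this, width correlates positively with pump frequency and negatively with robot velocity, which is the kind of relationship `WidthCorrelations.py` visualises.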
02_Block Tests Data
The experiments in this section were conducted on three block types with the following features:

| Block | Velocity (m/s) | Frequency (Hz) |
|-------|----------------|----------------|
| B1    | 0.11           | 17             |
| B2    | Adaptive       | 17             |
| B3    | 0.11           | Adaptive       |

BlockMeasurements.json
Contains width measurements (in mm) for three different blocks. This data is used for comparing width consistency across different block printing strategies.
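Comparing width consistency across blocks amounts to summary statistics per block. The sketch below assumes the JSON maps a block name to a list of width measurements in mm; the actual key layout of `BlockMeasurements.json` may differ.

```python
import statistics

def width_stats(measurements):
    """Return (mean, population std dev) of width measurements in mm."""
    return statistics.mean(measurements), statistics.pstdev(measurements)

# Inline sample standing in for BlockMeasurements.json, whose exact
# structure is an assumption here (block name -> list of widths in mm).
sample = {"B1": [50.1, 49.8, 50.3], "B2": [50.0, 50.0, 49.9]}
for block, widths in sample.items():
    mean, std = width_stats(widths)
    print(f"{block}: mean={mean:.2f} mm, std={std:.2f} mm")
```

A lower standard deviation indicates a more consistent printed width, which is the basis for comparing the three printing strategies.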
BlockTasks.json
Contains detailed task data from the pump and the robot for block printing processes, including:
- Task IDs and names
- Task types (ReadData, Read, SetValue)
- Actors (KukaPassiveRead, MaiPrinter)
- Timestamps for start and end times
- Job identification (e.g., "BLOCK1")
- Processing levels and indices
The file contains over 2300 task entries.
03_Features
Within both the BlockTasks.json and PumpTasks.json files, every data point adheres to a standardized Task Data Schema that includes the following fields:
- _id: A unique identifier for this record.
- task_id: A unique identifier for the specific task or operation.
- name: The name assigned to the task or process.
- type: The category or type of operation (e.g., a data reading action).
- main_actor: The primary machine or module responsible for carrying out the task.
- description: A brief textual description of the task.
- message: A log message or status note associated with the operation.
- element_id: A list of identifiers for related design elements (if any).
- actors_data: Contains information for the actors involved in the task (nested details are omitted).
- job: The job or process identifier associated with this record.
- level: The hierarchical level or depth in a process, indicating its relative position.
- index: A numerical order or position indicator for the task.
- start_time: The timestamp marking the start of the task or operation.
- end_time: The timestamp marking when the task was completed.
- response: Contains the outcome or data returned by the task (nested details are omitted).
- versions: Version control or metadata information regarding this record.
- project: The identifier for the project to which this record belongs.
- author: The user or entity that created or is responsible for this record.
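Since every record carries `start_time` and `end_time`, per-task durations (a timing metric computed in `Blocks.py`) can be derived directly. The sketch below uses a synthetic record following the schema above; the ISO-like timestamp format is an assumption, as the actual encoding in the files is not documented here.

```python
from datetime import datetime

def task_duration_seconds(task):
    """Duration of a task from its start_time/end_time schema fields."""
    fmt = "%Y-%m-%dT%H:%M:%S"  # timestamp format is an assumption
    start = datetime.strptime(task["start_time"], fmt)
    end = datetime.strptime(task["end_time"], fmt)
    return (end - start).total_seconds()

# Minimal synthetic record with a subset of the schema fields above.
task = {
    "task_id": "t-001", "name": "ReadPressure", "type": "ReadData",
    "main_actor": "MaiPrinter", "job": "BLOCK1",
    "start_time": "2024-05-01T10:00:00", "end_time": "2024-05-01T10:00:12",
}
duration = task_duration_seconds(task)
print(duration)
```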
04_Scripts
PumpData.py
Analyzes and visualises pump task data for the four mixtures.
WidthCorrelations.py
Analyzes and visualises correlations between width measurements, robot velocity, and pump frequency.
Clusters.py
Implements machine learning clustering on pump data and visualises the results.
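The trained artefacts saved in `Clusters/` can be reused to assign new pump readings to a cluster. The sketch below shows the standard joblib load-and-predict pattern; the number and order of input features expected by the saved scaler and KMeans model are assumptions, so match them to whatever `Clusters.py` used for training.

```python
import joblib
import numpy as np

def assign_cluster(features,
                   scaler_path="Clusters/scaler.pkl",
                   kmeans_path="Clusters/kmeans.pkl"):
    """Scale a feature vector with the saved StandardScaler and
    return the cluster index predicted by the saved KMeans model.

    The expected feature order/count is an assumption; it must match
    the features used when the models were trained.
    """
    scaler = joblib.load(scaler_path)
    kmeans = joblib.load(kmeans_path)
    scaled = scaler.transform(np.atleast_2d(features))
    return int(kmeans.predict(scaled)[0])
```

Loading the fitted `StandardScaler` alongside the model matters: KMeans distances are computed in the scaled feature space, so predicting on raw, unscaled readings would give meaningless cluster assignments.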
Blocks.py
Analyzes task data for different block types, calculates timing metrics, and visualises the results.
BlockMeasurements.py
Analyzes and visualises block width measurement data.
05_Usage
Dependencies
This project requires the following Python packages:
- pandas
- numpy
- matplotlib
- scikit-learn
- joblib
Install dependencies with:
pip install pandas numpy matplotlib scikit-learn joblib
Run the code
To generate the full analysis and plots from the dataset, simply execute the main.py file. Open your terminal, navigate to the code directory, and run:
cd path/to/your/code/directory
python main.py
For a more detailed overview of available parameters, run each of the individual Python files separately.
(2025-03-07)