# Distributed Training

To build a truly decentralised NFT platform, we must first be able to create the artist itself by decentralised means. AIRTIST introduces a Generative Adversarial Network (GAN) mechanism on the blockchain to realise deep neural network training.

To accelerate training, AIRTIST uses a loosely coupled distributed training method: the dataset is partitioned and distributed across several loosely coupled nodes (nodes located in different data centres or computing facilities). Every node runs an identical network topology but trains on its own data shard.

Each node transmits its computed gradients at pre-specified time intervals and updates its own network weights after the gradients are partially aggregated. This data-parallel training method can effectively improve training speed. In addition, AIRTIST plans to introduce AutoML technology in the future to optimise the network topology on top of this distributed computing layer, thereby improving overall network quality.
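The scheme above can be illustrated with a minimal sketch. The example below is not AIRTIST's implementation: it is a pure-Python toy in which each "node" fits a one-parameter linear model on its own data shard, and every `sync_every` steps the nodes' gradients are averaged (aggregated) before all replicas update, mimicking periodic gradient exchange between loosely coupled nodes. All function and parameter names here are hypothetical.

```python
def local_gradient(w, shard):
    # Mean gradient of the squared error for the model y = w * x
    # on this node's local data shard.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def distributed_train(shards, steps=300, sync_every=5, lr=0.05):
    """Data-parallel SGD sketch: each node keeps its own weight replica
    and trains on its local shard; at a fixed interval the nodes'
    gradients are averaged before every replica applies the update."""
    weights = [0.0] * len(shards)              # one replica per node
    for step in range(1, steps + 1):
        grads = [local_gradient(w, s) for w, s in zip(weights, shards)]
        if step % sync_every == 0:             # periodic aggregation
            avg = sum(grads) / len(grads)
            grads = [avg] * len(grads)
        weights = [w - lr * g for w, g in zip(weights, grads)]
    return weights

# Toy data drawn from y = 3x, split into two "node" shards.
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
shards = [data[:2], data[2:]]
print(distributed_train(shards))  # each replica converges toward the true slope 3.0
```

Between synchronisation points the replicas drift apart slightly, which is the trade-off of loose coupling: less communication at the cost of temporarily stale weights. In a real deep-learning setting the same pattern is applied per-layer to gradient tensors rather than to a single scalar.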

![Figure 8: Distributed training](/files/xyGSzAxa02CHzhMUVLUV)

