Validator Technical Overview
Last updated
The validator is built on Electron and runs locally on the user's device, allowing validators to operate seamlessly across operating systems without compatibility issues.
Validators currently rely on user-generated prompts sent to ChatGPT, allowing a tailored approach to data validation. This setup lets each validator specify the evaluation criteria that best assess a submission's validity based on their own expertise.
Over time, validators are expected to become increasingly sophisticated, employing complex algorithms and validation strategies akin to those used by high-frequency trading bots. This evolution will enhance their capability to assess data quickly and accurately. Some possible ways validators may differ from one another:
Different LLMs (Local and Remote): Validators use a combination of local LLMs for processing sensitive data securely on-premises and remote LLMs hosted in the cloud for greater computational power and scalability.
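One simple way to implement this split is a routing rule that keeps sensitive records on a local model and sends everything else to a hosted one. The sketch below is purely illustrative; the model names and the `sensitive` flag are assumptions, not part of the actual validator.

```python
# Hypothetical routing rule: records flagged as sensitive are processed by a
# local LLM; all other records may be sent to a remote hosted model.
# The model identifiers here are made up for illustration.

def choose_model(record: dict) -> str:
    """Pick an LLM backend for a record based on its sensitivity."""
    return "local-llm" if record.get("sensitive") else "remote-llm"

print(choose_model({"payload": "patient data", "sensitive": True}))   # local-llm
print(choose_model({"payload": "public weather feed"}))               # remote-llm
```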
Customized Prompts for LLMs: By customizing the prompts for LLMs, validators can extract more precise and relevant information from the data, improving the accuracy of their assessments.
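Prompt customization can be as simple as templating the validator's own criteria into the request. A minimal sketch, assuming a generic record/criteria structure (the wording and fields are hypothetical, not the validator's real prompt):

```python
# Illustrative sketch: rendering a validator-specific prompt from a record
# and a list of evaluation criteria. All names and wording are assumptions.

def build_validation_prompt(record: dict, criteria: list) -> str:
    """Render a prompt asking an LLM to judge a record against each criterion."""
    criteria_text = "\n".join(f"- {c}" for c in criteria)
    return (
        "You are a data validator. Evaluate the record below against each "
        "criterion and answer VALID or INVALID with a one-line justification.\n\n"
        f"Record: {record}\n\nCriteria:\n{criteria_text}"
    )

prompt = build_validation_prompt(
    {"sensor": "thermo-01", "reading_c": 21.4},
    ["Reading is within the sensor's rated range", "Timestamp is present"],
)
print(prompt)
```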
Anomaly Detection: Advanced algorithms are employed to identify anomalies or outliers in data sets, crucial for highlighting data that may require additional scrutiny.
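One of the simplest such algorithms is z-score outlier flagging: values far from the mean (in standard deviations) are marked for extra scrutiny. This is only a sketch of one possible strategy, with made-up readings:

```python
# Minimal anomaly-detection sketch: flag values more than `threshold`
# standard deviations from the sample mean. Data below is illustrative.
from statistics import mean, stdev

def zscore_outliers(values, threshold=2.0):
    """Return the values whose z-score exceeds the threshold."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) / sigma > threshold]

readings = [10.1, 9.8, 10.3, 10.0, 9.9, 42.0]  # 42.0 is an obvious outlier
print(zscore_outliers(readings))  # [42.0]
```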
Whitelisting Sources: Validators maintain a whitelist of trusted data sources to streamline the validation process for data from established and reliable providers, while still applying rigorous checks for new or lesser-known sources.
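In code, a whitelist check can gate how much of the validation pipeline a submission goes through. A minimal sketch, where the trusted domains and tier names are hypothetical:

```python
# Illustrative whitelist gate: data from trusted sources gets a lighter check,
# data from unknown sources goes through the full pipeline.
# The domain names and tier labels are assumptions for this example.

TRUSTED_SOURCES = {"data.noaa.gov", "ncbi.nlm.nih.gov"}

def validation_tier(source_domain: str) -> str:
    """Choose a validation tier based on whether the source is whitelisted."""
    return "light" if source_domain in TRUSTED_SOURCES else "full"

print(validation_tier("data.noaa.gov"))    # light
print(validation_tier("unknown.example"))  # full
```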
Cross-Validation with External Data: Data is cross-checked against external databases or published research to verify its validity, ensuring alignment with established facts or findings.
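A basic form of this check compares a submitted value against an external reference within a relative tolerance. The sketch below stubs out the reference lookup; the tolerance value is an arbitrary assumption:

```python
# Sketch of cross-validation against an external reference value.
# The reference would come from an external database in practice;
# here it is passed in directly, and the 5% tolerance is an assumption.

def cross_validate(submitted: float, reference: float, rel_tol: float = 0.05) -> bool:
    """Accept the submitted value if it is within rel_tol of the reference."""
    return abs(submitted - reference) <= rel_tol * abs(reference)

print(cross_validate(100.0, 98.0))  # True: within 5% of the reference
print(cross_validate(100.0, 80.0))  # False: disagrees with the reference
```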
Sanity Checks: Basic sanity checks are conducted to ensure the logical coherence of the data, filtering out clearly erroneous submissions early in the validation process.
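Such checks are typically cheap predicates run before any expensive validation. A minimal sketch, assuming a hypothetical record shape with `value` and `source` fields and an illustrative plausibility range:

```python
# Sketch of early sanity checks. Field names and the plausible range are
# illustrative assumptions, not the validator's actual schema.

def sanity_check(record: dict) -> list:
    """Return a list of human-readable problems; an empty list means pass."""
    problems = []
    if "value" not in record:
        problems.append("missing 'value' field")
    elif not isinstance(record["value"], (int, float)):
        problems.append("'value' is not numeric")
    elif not (-273.15 <= record["value"] <= 1000.0):
        problems.append("'value' outside plausible range")
    if not record.get("source"):
        problems.append("missing 'source'")
    return problems

print(sanity_check({"value": 21.4, "source": "lab-a"}))  # []
print(sanity_check({"value": -5000.0}))
```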
Replication and In Silico Experiments: For critical data, validators may replicate experiments or run simulations ("in silico" experiments) to verify the data's reliability. This approach includes both physical replication in controlled environments and theoretical simulations to test data against expected outcomes.