Etherscan Launches AI-Powered Code Reader
On June 19, Etherscan, the well-known Ethereum block explorer and analytics platform, introduced a new tool named “Code Reader”. Using artificial intelligence (AI), the Code Reader retrieves and interprets the source code of a specified contract address. Given a user prompt, it generates a response via OpenAI’s large language model (LLM), offering insights into the contract’s source code files.
How Code Reader Works
According to Etherscan developers, to utilize this tool, one needs a valid OpenAI API Key and adequate OpenAI usage limits. Notably, the tool does not store users’ API keys.
The Code Reader’s applications range from providing AI-generated explanations for deeper insight into contract code to supplying comprehensive lists of smart contract functions associated with Ethereum data. Furthermore, it helps users understand how the underlying contract interacts with decentralized applications (dApps).
After retrieving the contract files, users can select a specific source code file to peruse. “Moreover, you can modify the source code directly within the user interface before sharing it with the AI,” developers added.
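Etherscan has not published the Code Reader’s internals, but the workflow described, retrieving a contract’s verified source code and then prompting an OpenAI model about it, can be approximated with public APIs. The Python sketch below is illustrative only and is not Etherscan’s implementation; the contract address, API keys, and model name are placeholders.

```python
# Illustrative sketch of the described flow, not Etherscan's Code Reader itself.
import requests
from openai import OpenAI

ETHERSCAN_API_KEY = "YOUR_ETHERSCAN_KEY"  # placeholder
CONTRACT_ADDRESS = "0x..."                # placeholder contract address

# Etherscan's public "getsourcecode" endpoint returns verified source files.
resp = requests.get(
    "https://api.etherscan.io/api",
    params={
        "module": "contract",
        "action": "getsourcecode",
        "address": CONTRACT_ADDRESS,
        "apikey": ETHERSCAN_API_KEY,
    },
    timeout=30,
)
source_code = resp.json()["result"][0]["SourceCode"]

# Send the retrieved source plus a user prompt to an OpenAI LLM.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
completion = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "system", "content": "You explain Solidity smart contracts."},
        {"role": "user", "content": f"Explain what this contract does:\n{source_code}"},
    ],
)
print(completion.choices[0].message.content)
```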
Concerns Amid the AI Boom
Despite the increasing integration of AI into various applications, some experts have expressed concern about the feasibility of current AI models. A recent report by Singapore-based venture capital firm Foresight Ventures suggests that “computing power resources will be the next significant battleground in the coming decade.”
As demand surges for training large AI models on decentralized, distributed computing power networks, researchers note several constraints, including complex data synchronization, network optimization, and data privacy and security concerns.
Size Matters in AI Training
Foresight researchers highlighted that a large model with 175 billion parameters, stored in single-precision floating-point representation, would occupy roughly 700 gigabytes of memory. Distributed training, however, requires these parameters to be frequently transmitted and updated between computing nodes. In a scenario involving 100 computing nodes, with each node needing to update all parameters at each step, the model would need to transmit about 70 terabytes of data per second, greatly surpassing most networks’ capacity.
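The report’s figures follow from simple arithmetic: 175 billion parameters at four bytes each (single precision) come to about 700 gigabytes, and 100 nodes each exchanging a full copy of the parameters per step multiplies that to roughly 70 terabytes. A minimal check, assuming those round numbers:

```python
# Back-of-the-envelope check of the report's figures (illustrative only).
params = 175e9        # 175 billion parameters
bytes_per_param = 4   # single-precision (FP32) floats

model_size_gb = params * bytes_per_param / 1e9
print(f"Model size: ~{model_size_gb:.0f} GB")     # ~700 GB

nodes = 100           # each node exchanges the full parameter set per step
traffic_tb = model_size_gb * nodes / 1e3
print(f"Per-step traffic: ~{traffic_tb:.0f} TB")  # ~70 TB
```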
Researchers concluded, “In most scenarios, small AI models are still a more feasible choice and should not be overlooked too early amid the Fear of Missing Out (FOMO) on large models.”