

Dev Release 110

September 3, 2024

Greetings BlockDAG Community,

Introduction
With development on the verge of completion ahead of the testnet launch, the team is restructuring the code to increase the overall performance of the system.
To elevate the performance, scalability, and responsiveness of our system, we have implemented a comprehensive update to the schema responsible for synchronizing backend queries. This update introduces several technical optimizations, including table restructuring, advanced indexing strategies, and refined data relationships, all aimed at reducing latency and enhancing query efficiency. Below, we delve deeper into the technical details and algorithms that underpin these improvements.

Technical Updates

  1. Schema Restructuring:
  • Table Redesign and Normalization:
  • We implemented a denormalization strategy for high-traffic tables to reduce the number of joins required in query execution. Frequently accessed data fields have been replicated across related tables, optimizing read-heavy operations.
  • In contrast, certain tables underwent normalization to eliminate redundancy and optimize storage efficiency for less frequently accessed data, maintaining a balance between query speed and storage efficiency.
  • Partitioning and Sharding:
  • To handle large datasets more efficiently, we adopted horizontal partitioning based on common query parameters, such as date ranges or geographic regions. This ensures that each query scans a smaller subset of data, reducing read times.
  • Sharding has been applied to distribute data across multiple servers based on specific shard keys, enhancing parallel processing capabilities and supporting higher volumes of concurrent read/write operations.
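
To make the sharding approach above concrete, here is a minimal routing sketch in TypeScript. The four-shard layout, the connection strings, and the `shardFor` helper are illustrative assumptions; the actual production configuration is not described in this update.

```typescript
import { createHash } from "crypto";

// Hypothetical shard connection strings; the real deployment layout is not public.
const SHARDS = [
  "postgres://db-shard-0.internal/blockdag",
  "postgres://db-shard-1.internal/blockdag",
  "postgres://db-shard-2.internal/blockdag",
  "postgres://db-shard-3.internal/blockdag",
];

// Map a shard key (e.g. a wallet address or region code) to a single shard.
// Hashing the key keeps the distribution roughly even across shards.
function shardFor(shardKey: string): string {
  const digest = createHash("sha256").update(shardKey).digest();
  const bucket = digest.readUInt32BE(0) % SHARDS.length;
  return SHARDS[bucket];
}

// All reads/writes for the same key land on the same shard, so traffic for
// different keys can be processed in parallel on different servers.
console.log(shardFor("wallet:0xabc123"));
```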

  2. Optimized Data Relationships:
  • Foreign Key Redesign:
  • We re-evaluated foreign key constraints and selectively removed non-essential constraints that were causing significant overhead during high-volume insertions or updates. This change reduces locking and improves transaction throughput.
  • Use of Materialized Views:
  • For complex, frequently run queries, we created materialized views that store the result sets, allowing for much faster access times. The views are refreshed periodically to ensure data consistency while significantly improving read performance.
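
The refresh cycle mentioned above can be driven from the backend with a simple scheduler. The sketch below assumes a PostgreSQL-style `REFRESH MATERIALIZED VIEW CONCURRENTLY` statement and a hypothetical view named `daily_tx_summary`; the actual database engine, view names, and refresh interval are not specified in this update.

```typescript
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Hypothetical materialized view name, used here for illustration only.
const VIEW_NAME = "daily_tx_summary";

// Periodically rebuild the precomputed result set so reads stay fast while the
// view lags live data by at most one refresh interval.
async function refreshView(): Promise<void> {
  // CONCURRENTLY lets readers keep using the old contents during the refresh
  // (PostgreSQL requires a unique index on the view for this option).
  await pool.query(`REFRESH MATERIALIZED VIEW CONCURRENTLY ${VIEW_NAME}`);
}

setInterval(() => {
  refreshView().catch((err) => console.error("Materialized view refresh failed:", err));
}, 5 * 60 * 1000); // every 5 minutes (illustrative interval)
```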

  3. Advanced Index Optimization:
  • Bitmap Indexing:
  • Bitmap indexes have been utilized for low-cardinality columns to accelerate query performance. These indexes are particularly effective for columns with a limited number of unique values, reducing the data retrieval time.
  • Covering Indexes:
  • We added covering indexes to support specific queries where the indexed fields contain all the columns needed for the query, eliminating the need to access the table rows. This reduces I/O operations and speeds up query execution (see the sketch after this list).
  • Clustered vs. Non-Clustered Indexing:
  • By analyzing query patterns, we switched certain non-clustered indexes to clustered indexes where data is physically ordered on the disk based on the indexed column, improving the speed of range-based searches.
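
As an illustration of the covering indexes described above, the migration-style sketch below creates a PostgreSQL `INCLUDE` index from the backend. The table and column names (`transactions`, `wallet_address`, `created_at`, `amount`) are placeholders rather than the actual schema.

```typescript
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// A covering index: queries that filter on wallet_address and read only
// created_at and amount can be answered from the index alone, with no
// table-row lookups, which cuts I/O on read-heavy paths.
async function createCoveringIndex(): Promise<void> {
  await pool.query(`
    CREATE INDEX IF NOT EXISTS idx_tx_wallet_covering
    ON transactions (wallet_address, created_at)
    INCLUDE (amount)
  `);
}

// A query such as
//   SELECT created_at, amount FROM transactions
//   WHERE wallet_address = $1 ORDER BY created_at DESC;
// can then be served as an index-only scan.
createCoveringIndex().catch(console.error);
```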

  4. Improved Data Synchronization Algorithms:
  • Delta Synchronization:
  • Implemented a delta synchronization algorithm that identifies and synchronizes only the data changes since the last update. This approach reduces data transfer size and synchronization time, especially for large datasets.
  • Asynchronous Data Fetching:
  • The synchronization process now leverages asynchronous I/O operations to fetch data from the database, allowing other processes to run concurrently without waiting for I/O completion. This enhances throughput and reduces the time to synchronize data.
  • Batch Processing with Dynamic Batching:
  • We optimized synchronization by introducing a dynamic batching mechanism that adjusts batch sizes based on current server load and network conditions, reducing the likelihood of bottlenecks and improving overall synchronization efficiency (a rough sketch of delta sync with dynamic batching follows this list).
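
Below is a rough sketch of the delta synchronization and dynamic batching described above. The `updated_at` cursor column, the timing thresholds, and the `fetchChangedRows`/`applyBatch` helpers are illustrative assumptions standing in for the real data-access code.

```typescript
interface Row { id: string; updated_at: Date; }

// Placeholder data-access helpers; in practice these would query the database
// (e.g. SELECT ... WHERE updated_at > $since LIMIT $limit) and upsert the rows.
async function fetchChangedRows(since: Date, limit: number): Promise<Row[]> { return []; }
async function applyBatch(rows: Row[]): Promise<void> {}

// Delta sync: only rows modified after the last successful sync are transferred.
// Dynamic batching: the batch size adapts to how long the previous batch took.
async function syncSince(lastSync: Date): Promise<Date> {
  let batchSize = 500;            // starting batch size (illustrative)
  let cursor = lastSync;

  while (true) {
    const started = Date.now();
    const rows = await fetchChangedRows(cursor, batchSize);
    if (rows.length === 0) break; // nothing left to sync

    await applyBatch(rows);
    cursor = rows[rows.length - 1].updated_at;

    // Grow the batch when the server keeps pace, shrink it when batches run long.
    const elapsed = Date.now() - started;
    if (elapsed < 200) batchSize = Math.min(batchSize * 2, 5000);
    else if (elapsed > 1000) batchSize = Math.max(Math.floor(batchSize / 2), 50);
  }
  return cursor;
}
```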

Algorithms Implemented

  • Cost-Based Query Optimization:
  • The updated schema utilizes a cost-based optimizer (CBO) to determine the most efficient execution plan for queries. The CBO considers multiple factors such as table size, index availability, and data distribution statistics to select the optimal join strategies (e.g., hash join, merge join) and access paths (e.g., index scan, sequential scan).
  • LRU Caching Mechanism:
  • We’ve implemented a Least Recently Used (LRU) caching algorithm to cache frequently accessed data in memory. This reduces the need for repeated disk I/O and speeds up query responses for popular datasets (a minimal sketch appears after this list). The cache size dynamically adjusts based on system load to ensure optimal memory utilization.
  • Connection Pooling with Dynamic Scaling:
  • The backend now uses a connection pooling algorithm that scales dynamically based on incoming request rates. By managing a pool of database connections, the system can reuse existing connections, reducing the overhead of establishing new connections for each query and improving overall throughput.
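
Here is the minimal LRU sketch referenced above: a fixed-capacity map that evicts the entry touched least recently. The capacity and the key/value types are illustrative, and the load-based resizing mentioned in the update is not shown.

```typescript
// Minimal LRU cache: a Map preserves insertion order, so re-inserting a key on
// every access keeps recently used entries at the end and leaves the least
// recently used entry at the front, ready for eviction.
class LruCache<K, V> {
  private entries = new Map<K, V>();

  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    const value = this.entries.get(key);
    if (value === undefined) return undefined;
    // Move the key to the "most recently used" position.
    this.entries.delete(key);
    this.entries.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    if (this.entries.has(key)) this.entries.delete(key);
    this.entries.set(key, value);
    if (this.entries.size > this.capacity) {
      // Evict the least recently used entry (the first key in the Map).
      const oldest = this.entries.keys().next().value as K;
      this.entries.delete(oldest);
    }
  }
}

// Example: cache query results keyed by the query text (capacity is illustrative).
const queryCache = new LruCache<string, unknown>(1000);
```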


Benefits

  • Enhanced Query Performance:
  • The use of advanced indexing techniques, materialized views, and partitioning has significantly improved the speed and efficiency of both simple and complex queries, providing a more responsive experience to users.
  • Reduced Latency and Increased Throughput:
  • With delta synchronization and asynchronous fetching, data synchronization between the backend and the database is now faster, reducing latency and supporting higher transaction volumes.
  • Scalable and Reliable Architecture:
  • The implementation of sharding, connection pooling, and LRU caching ensures the system is highly scalable and capable of handling increased loads without compromising performance or reliability.
  • Efficient Resource Utilization:
  • The updated schema and optimized query handling mechanisms ensure that resources are used efficiently, minimizing the cost of operation while maximizing performance.

MetaMask Integration for EVM Chains

We are excited to announce that our team has successfully completed the development and integration of MetaMask for EVM-compatible chains on the frontend. This integration allows users to seamlessly connect their MetaMask wallets, enabling smooth interactions with decentralized applications (dApps) and blockchain networks directly from our platform. With this feature, users can manage their digital assets, perform transactions, and interact with smart contracts securely and effortlessly.
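
For readers building against this feature, the sketch below shows the standard way a browser dApp asks the injected MetaMask provider (EIP-1193) for account access. It covers only the basic `eth_requestAccounts` flow; network switching and the specific EVM chains supported by the dashboard are not detailed here.

```typescript
// Minimal MetaMask connection flow using the injected EIP-1193 provider.
// The cast is a shortcut; a real project would use proper provider typings.
const ethereum = (window as any).ethereum;

async function connectMetaMask(): Promise<string[]> {
  if (!ethereum) {
    throw new Error("MetaMask is not installed");
  }
  // Prompts the user to approve the connection and returns the selected accounts.
  const accounts: string[] = await ethereum.request({
    method: "eth_requestAccounts",
  });
  return accounts;
}

connectMetaMask()
  .then((accounts) => console.log("Connected account:", accounts[0]))
  .catch((err) => console.error("Wallet connection failed:", err));
```
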
Stay tuned for more updates as we continue to enhance our platform's capabilities and provide a more robust and user-friendly experience!

Conclusion

These optimizations represent a significant leap forward in how our backend handles data synchronization and query processing. By integrating advanced algorithms and leveraging optimized data structures, we are better equipped to provide a smoother, faster, and more reliable experience to our users.
Stay tuned for more technical insights as we continue to evolve and enhance our system to meet the growing demands of our users!

Awaiting Apple's Review for BlockDAG X1 Application

Current Status
This week, there haven’t been any major updates on the development side apart from a few bug fixes in the BlockDAG X1 application. Our team had been awaiting feedback from Apple, and by the end of the day Apple responded with additional policy considerations to address in the review process.
Next Steps
We have carefully prepared a comprehensive set of answers and clarifications to address Apple's queries and have submitted them for further review. At this point, we are waiting for Apple's response to determine the next steps in our release process.
Important note: For Android users, the application is fully up to date, and all of the latest changes and features are available on the Android platform. Meanwhile, we're still doing our best to bring the new experience to the iOS platform.
Conclusion
We appreciate your patience and understanding during this time. Our team remains committed to resolving any outstanding issues swiftly and efficiently to ensure that the BlockDAG X1 application meets all necessary requirements and is available to our users as soon as possible. Stay tuned for further updates!
