
[FLINK-36931][cdc] FlinkCDC YAML supports batch mode #3812

Open · wants to merge 5 commits into master
Conversation


@aiwenmo (Contributor) commented on Dec 23, 2024

Premise

MysqlCDC supports snapshot mode

MySqlSource, the MysqlCDC source in Flink CDC, supports StartupMode.SNAPSHOT; in this mode it reports Boundedness.BOUNDED and can therefore run in RuntimeExecutionMode.BATCH.
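
For reference, a minimal DataStream-level sketch (not part of this PR) of running such a snapshot-only read under batch execution. The connection parameters are placeholders, the package names assume Flink CDC 3.x, and StartupOptions.snapshot() is assumed to be available as described above:

```java
import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.cdc.connectors.mysql.source.MySqlSource;
import org.apache.flink.cdc.connectors.mysql.table.StartupOptions;
import org.apache.flink.cdc.debezium.JsonDebeziumDeserializationSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SnapshotBatchJob {
    public static void main(String[] args) throws Exception {
        // Snapshot-only source: reads the full table data once and then finishes,
        // so the source is bounded and eligible for batch execution.
        MySqlSource<String> source =
                MySqlSource.<String>builder()
                        .hostname("localhost")
                        .port(3306)
                        .databaseList("app_db")
                        .tableList("app_db.orders")
                        .username("flinkuser")
                        .password("flinkpw")
                        .startupOptions(StartupOptions.snapshot())
                        .deserializer(new JsonDebeziumDeserializationSchema())
                        .build();

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Switch the whole job to batch execution; streaming is the default.
        env.setRuntimeMode(RuntimeExecutionMode.BATCH);

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "MySQL Snapshot Source")
                .print();

        env.execute("Snapshot sync in batch mode");
    }
}
```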

Streaming vs. Batch

Streaming mode suits: jobs with high real-time requirements; stateless jobs with many shuffle stages in non-real-time scenarios; jobs that need continuous and stable processing capacity; and jobs with small state, simple topologies, and low fault-tolerance costs.

Batch mode suits: jobs with many stateful operators in non-real-time scenarios; jobs that need high resource utilization; and jobs with large state and complex topologies, where fault tolerance in streaming mode would be costly.

Expectation

Full snapshot synchronization

The FlinkCDC YAML job reads only the full snapshot data of the database and then writes it to the target database, in either Streaming or Batch mode. It is mainly used for full-data catch-up.

Currently, a FlinkCDC YAML job with the SNAPSHOT startup strategy runs correctly in Streaming mode but not in Batch mode.

Full-incremental offline

On its first run, the FlinkCDC YAML job collects the full snapshot data plus the incremental log data from the final offset of the full-incremental snapshot algorithm up to the current EndingOffset; on subsequent runs, it collects only the range from the previous run's EndingOffset to the current EndingOffset.

The job runs in Batch mode. Users can schedule it periodically, tolerate a bounded data delay (e.g., hourly or daily), and still obtain eventual consistency. Because each periodically scheduled incremental run collects only the logs between the last EndingOffset and the current EndingOffset, the full data is never collected twice.

Test

Full snapshot synchronization in Batch mode

  1. After removing the PartitionOperator, all operators are chained into a single PipelinedRegion and the job runs correctly;
  2. With multiple PipelinedRegions, only the first PipelinedRegion appears in the JobGraph, so the job cannot run correctly;
  3. After also removing the SchemaOperator, a correct JobGraph can be generated even with multiple PipelinedRegions, but the sink still requires the coordinator operator to be registered.

Solution

Use StartupMode.SNAPSHOT + Streaming for full snapshot synchronization

No source-code change is needed. For MysqlCDC, once StartupMode.SNAPSHOT is specified, a full-snapshot synchronization job for the whole database can already run in streaming mode. This is not the optimal solution, but the capability is available today.

Extend FlinkPipelineComposer with a Batch mode to support full batch synchronization

Topology graph: Source -> PreTransform -> PostTransform -> Router -> PartitionBy -> Sink

There are no change events in Batch mode, so schema evolution does not need to be considered; in addition, automatic table creation can be completed before the job starts.
The field derivation done by transform can likewise be moved to before the job starts instead of happening at runtime, and the same applies to router derivation.
Workload: implement a Batch construction strategy in FlinkPipelineComposer. The router needs to become an independent step, and the sink needs to be extended or adapted so that it does not require a coordinator (batch writing would be a further improvement); a rough sketch follows.
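
As a rough illustration of this construction strategy, here is a hypothetical sketch of moving table creation and schema derivation to before job submission and then running the pipeline in batch mode. The TableSchemaDeriver and TargetCatalog interfaces are invented placeholders, not existing Flink CDC classes:

```java
import java.util.List;

import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

/** Hypothetical sketch of a batch-oriented composer step. */
public final class BatchComposeSketch {

    /** Placeholder: applies transform/route rules to the source schemas before the job starts. */
    interface TableSchemaDeriver {
        List<String> deriveTargetTableDdl();
    }

    /** Placeholder: executes DDL against the target system. */
    interface TargetCatalog {
        void executeDdl(String ddl);
    }

    public static StreamExecutionEnvironment prepare(
            TableSchemaDeriver deriver, TargetCatalog catalog) {
        // 1. Table creation and schema/route derivation happen before submission,
        //    so no SchemaOperator coordinator is needed at runtime.
        for (String ddl : deriver.deriveTargetTableDdl()) {
            catalog.executeDdl(ddl);
        }
        // 2. The job itself then runs as a bounded batch pipeline.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setRuntimeMode(RuntimeExecutionMode.BATCH);
        return env;
    }
}
```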

Extend StartupMode to let users specify an Offset range, supporting incremental offline synchronization

Allow users to specify the binlog offset range to collect; the user's own platform then records the EndingOffset of each execution and schedules the job periodically.
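
A hypothetical sketch of what such an offset range could build on. StartupOptions.specificOffset(file, pos) exists today for the starting point; the ending-offset part is purely illustrative and does not exist yet:

```java
import org.apache.flink.cdc.connectors.mysql.table.StartupOptions;

/** Hypothetical sketch of the proposed offset-range startup. */
public final class OffsetRangeSketch {

    /** A binlog position recorded by the scheduling platform after each run. */
    public static final class BinlogPosition {
        final String file;
        final long position;

        BinlogPosition(String file, long position) {
            this.file = file;
            this.position = position;
        }
    }

    /** Existing API: start reading at the previous run's EndingOffset. */
    public static StartupOptions startFrom(BinlogPosition lastEndingOffset) {
        return StartupOptions.specificOffset(lastEndingOffset.file, lastEndingOffset.position);
    }

    // Proposed (does not exist yet): also pass the current EndingOffset so the
    // source stops there, e.g. something like
    //   StartupOptions.specificOffsetRange(start, end)
    // The platform would persist `end` and feed it back as `start` on the next run.
}
```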

Discussion

1. Is it necessary to implement Batch mode support at all, given that the benefits may be small or the performance may not beat Streaming? Specifically, which Batch optimizations can actually be leveraged?

2. Should the full-incremental offline approach be implemented (so that users can periodically schedule incremental log synchronization)?

Code implementation

Topology graph: Source -> PreTransform -> PostTransform -> SchemaBatchOperator -> PartitionBy(Batch) -> BatchSink
PS: the data flow contains only CreateTableEvent, CreateTableCompletedEvent, and DataChangeEvent (inserts).

Implementation ideas


1. The Source first emits all CreateTableEvents, then appends a single CreateTableCompletedEvent, and only then emits the snapshot data.
2. PreTransform and PostTransform forward the CreateTableCompletedEvent directly; their behavior is otherwise unchanged.
3. When SchemaBatchOperator receives a CreateTableEvent, it only caches it and emits nothing.
4. When SchemaBatchOperator receives the CreateTableCompletedEvent, it derives the widest downstream table schema according to the router rules, executes the table-creation statements in the external data source, and then emits the wide-table CreateTableEvents to BatchPrePartition (see the sketch after this list).
5. BatchPrePartition broadcasts the CreateTableEvents to PostPartition and partitions the DataChangeEvents to PostPartition by table ID and primary key.
6. PostPartition forwards the CreateTableEvents and DataChangeEvents to BatchSink, which performs batch writing.
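
To make steps 3 and 4 concrete, here is a simplified, hypothetical sketch of the SchemaBatchOperator's event handling, written as a plain class rather than a Flink operator. The schema-merging and table-creation hooks are placeholders, and CreateTableCompletedEvent is the new event introduced by this PR (matched here only by name so the sketch compiles against released Flink CDC):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Consumer;

import org.apache.flink.cdc.common.event.CreateTableEvent;
import org.apache.flink.cdc.common.event.DataChangeEvent;
import org.apache.flink.cdc.common.event.Event;
import org.apache.flink.cdc.common.event.TableId;
import org.apache.flink.cdc.common.schema.Schema;

/** Simplified sketch of the SchemaBatchOperator behavior described above. */
public class SchemaBatchSketch {

    private final Map<TableId, Schema> cachedSchemas = new LinkedHashMap<>();
    private final Consumer<Event> output;

    public SchemaBatchSketch(Consumer<Event> output) {
        this.output = output;
    }

    public void process(Event event) {
        if (event instanceof CreateTableEvent) {
            // Step 3: cache the upstream schema, emit nothing yet.
            CreateTableEvent create = (CreateTableEvent) event;
            cachedSchemas.put(create.tableId(), create.getSchema());
        } else if (isCreateTableCompleted(event)) {
            // Step 4: all upstream schemas are known; derive the widest target
            // schema per route rule, create the table in the external sink,
            // then forward the merged CreateTableEvent downstream.
            for (Map.Entry<TableId, Schema> e : cachedSchemas.entrySet()) {
                Schema widest = deriveWidestSchema(e.getKey(), e.getValue());
                createTableInExternalSystem(e.getKey(), widest);
                output.accept(new CreateTableEvent(e.getKey(), widest));
            }
        } else if (event instanceof DataChangeEvent) {
            // Snapshot inserts flow through unchanged (steps 5-6 handle partitioning).
            output.accept(event);
        }
    }

    private boolean isCreateTableCompleted(Event event) {
        // Placeholder for `event instanceof CreateTableCompletedEvent`.
        return "CreateTableCompletedEvent".equals(event.getClass().getSimpleName());
    }

    private Schema deriveWidestSchema(TableId tableId, Schema upstream) {
        return upstream; // placeholder: apply router rules and schema merging here
    }

    private void createTableInExternalSystem(TableId tableId, Schema schema) {
        // placeholder: issue CREATE TABLE against the sink's catalog/metadata API
    }
}
```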

Implementation effect

Computing node 1: Source -> PreTransform -> PostTransform -> SchemaBatchOperator -> BatchPrePartition
Computing node 2: PostPartition -> BatchSink
Batch mode: Computing node 2 starts computing only after computing node 1 is completely finished.


@aiwenmo commented on Dec 25, 2024

Code implementation

Topology graph: Source -> PreTransform -> PostTransform -> SchemaBatchOperator -> PartitionBy(Batch) -> BatchSink

  1. Add SchemaBatchOperator, which drops the handling of schema change events and no longer uses the Coordinator.
  2. Add RegularPrePartitionBatchOperator, which removes the SchemaEvolutionClient.
  3. Add DataBatchSinkFunctionOperator and DataBatchSinkWriterOperator, which remove the SchemaEvolutionClient.
  4. Remove the SchemaRegistry in batch mode.


@aiwenmo commented on Dec 27, 2024

  1. The DataSource sends a CreateTableCompletedEvent after sending all CreateTableEvents.
  2. Add CreateTableCompletedEvent to notify the SchemaBatchOperator to merge all cached CreateTableEvents.


@aiwenmo commented on Dec 31, 2024

During testing, a new bug was discovered and fixed; this PR relies on that fix (#3826).
