
*: improve the user experience for using Ctrl+C to make tidb exit #58537

Merged
merged 1 commit into pingcap:master on Dec 26, 2024

Conversation

tiancaiamao
Contributor

What problem does this PR solve?

Issue Number: close #58418

Problem Summary:

What changed and how does it work?

Continue fixing a few more places so that Ctrl+C makes tidb exit as expected, improving the user experience.

Check List

Tests

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below)
  • No need to test
    • I checked and no code files have been changed.

Side effects

  • Performance regression: Consumes more CPU
  • Performance regression: Consumes more Memory
  • Breaking backward compatibility

Documentation

  • Affects user behaviors
  • Contains syntax changes
  • Contains variable changes
  • Contains experimental features
  • Changes MySQL compatibility

Release note

Please refer to Release Notes Language Style Guide to write a quality release note.

None

@ti-chi-bot ti-chi-bot bot added release-note-none Denotes a PR that doesn't merit a release note. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Dec 25, 2024

tiprow bot commented Dec 25, 2024

Hi @tiancaiamao. Thanks for your PR.

PRs from untrusted users cannot be marked as trusted with /ok-to-test in this repo, meaning untrusted PR authors can never trigger tests themselves. Collaborators can still trigger tests on the PR using /test all.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

Comment on lines +328 to 331

+	resourcemanager.InstanceResourceManager.Stop()
 	cleanup(svr, storage, dom)
 	cpuprofile.StopCPUProfiler()
-	resourcemanager.InstanceResourceManager.Stop()
 	executor.Stop()
Contributor Author

@tiancaiamao tiancaiamao Dec 25, 2024


While debugging, I kept seeing that the resource manager goroutine had not exited.
No other service depends on this one, so it can exit earlier; I adjusted the shutdown order here accordingly.
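The principle behind the reordering can be shown in a minimal, self-contained sketch (not TiDB's actual shutdown code; the service names are hypothetical): services with no dependents stop first, so their goroutines exit promptly instead of lingering behind slower cleanup steps.

```go
package main

import "fmt"

// shutdownOrder returns the order in which services are stopped: a service
// that nothing else depends on (like the resource manager) goes first, so
// its goroutines exit promptly instead of waiting behind slower cleanup.
func shutdownOrder() []string {
	return []string{"resource-manager", "server-cleanup", "cpu-profiler", "executor"}
}

func main() {
	for _, s := range shutdownOrder() {
		fmt.Println("stopping:", s)
	}
}
```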

@@ -1182,7 +1182,7 @@ func (do *Domain) loadSchemaInLoop(ctx context.Context) {
 		case <-do.exit:
 			return
 		}
-		do.refreshMDLCheckTableInfo()
+		do.refreshMDLCheckTableInfo(ctx)
Contributor Author


goroutine 507 [select]:
github.com/tikv/client-go/v2/config/retry.(*Config).createBackoffFn.newBackoffFn.func2({0x70d3428, 0xc047076990}, 0xffffffffffffffff)
	/home/genius/go/pkg/mod/github.com/tikv/client-go/[email protected]/config/retry/config.go:204 +0x4fd
github.com/tikv/client-go/v2/config/retry.(*Backoffer).BackoffWithCfgAndMaxSleep(0xc070e36cf0, 0xc00169e990, 0xffffffffffffffff, {0x7099b00, 0xc0d8e1e198})
	/home/genius/go/pkg/mod/github.com/tikv/client-go/[email protected]/config/retry/backoff.go:195 +0x66d
github.com/tikv/client-go/v2/config/retry.(*Backoffer).Backoff(0xc070e36cf0, 0xc00169e990, {0x7099b00, 0xc0d8e1e198})
	/home/genius/go/pkg/mod/github.com/tikv/client-go/[email protected]/config/retry/backoff.go:122 +0x23e
github.com/pingcap/tidb/pkg/store/driver/backoff.(*Backoffer).Backoff(0x4?, 0x42?, {0x7099b00?, 0xc0d8e1e198?})
	/home/genius/project/src/github.com/pingcap/tidb/pkg/store/driver/backoff/backoff.go:55 +0x25
github.com/pingcap/tidb/pkg/store/copr.(*copIteratorWorker).handleCopResponse(0xc0459213b0, 0xc0705d8438, 0x0, 0xc0f767c870, {0xc03ea91080, 0x170, 0x170}, 0xc0232d4880, 0xc0e7c80a80, 0x12d2c)
	/home/genius/project/src/github.com/pingcap/tidb/pkg/store/copr/coprocessor.go:1541 +0x445
github.com/pingcap/tidb/pkg/store/copr.(*copIteratorWorker).handleTaskOnce(0xc0459213b0, 0xc0705d8438, 0xc0e7c80a80)
	/home/genius/project/src/github.com/pingcap/tidb/pkg/store/copr/coprocessor.go:1425 +0x102b
github.com/pingcap/tidb/pkg/store/copr.(*liteCopIteratorWorker).liteSendReq(0xc0470769c0, {0x70d3428?, 0xc0470357d0?}, 0xc0abcdeb40)
	/home/genius/project/src/github.com/pingcap/tidb/pkg/store/copr/coprocessor.go:1186 +0x1c5
github.com/pingcap/tidb/pkg/store/copr.(*copIterator).Next(0xc0abcdeb40, {0x70d3428?, 0xc0470357d0?})
	/home/genius/project/src/github.com/pingcap/tidb/pkg/store/copr/coprocessor.go:1109 +0x1d7
github.com/pingcap/tidb/pkg/distsql.(*selectResult).fetchResp(0xc04abe4700, {0x70d3428, 0xc0470357d0})
	/home/genius/project/src/github.com/pingcap/tidb/pkg/distsql/select_result.go:318 +0x82
github.com/pingcap/tidb/pkg/distsql.(*selectResult).Next(0xc04abe4700, {0x70d3428, 0xc0470357d0}, 0xc0edd1c7d0)
	/home/genius/project/src/github.com/pingcap/tidb/pkg/distsql/select_result.go:384 +0x9d
github.com/pingcap/tidb/pkg/executor.(*tableResultHandler).nextChunk(0x70d3428?, {0x70d3428?, 0xc0470357d0?}, 0x1f?)
	/home/genius/project/src/github.com/pingcap/tidb/pkg/executor/table_reader.go:612 +0xa2
github.com/pingcap/tidb/pkg/executor.(*TableReaderExecutor).Next(0xc259214288, {0x70d3428, 0xc0470357d0}, 0xc0edd1c7d0)
	/home/genius/project/src/github.com/pingcap/tidb/pkg/executor/table_reader.go:331 +0x213
github.com/pingcap/tidb/pkg/executor/internal/exec.Next({0x70d3428, 0xc0470357d0}, {0x710b080, 0xc259214288}, 0xc0edd1c7d0)
	/home/genius/project/src/github.com/pingcap/tidb/pkg/executor/internal/exec/executor.go:460 +0x29f
github.com/pingcap/tidb/pkg/executor.(*ExecStmt).next(0xc04abe4540, {0x70d3428, 0xc0470357d0}, {0x710b080, 0xc259214288}, 0xc0edd1c7d0)
	/home/genius/project/src/github.com/pingcap/tidb/pkg/executor/adapter.go:1269 +0x6e
github.com/pingcap/tidb/pkg/executor.(*recordSet).Next(0xc045921420, {0x70d3428?, 0xc0470357d0?}, 0xc0edd1c7d0)
	/home/genius/project/src/github.com/pingcap/tidb/pkg/executor/adapter.go:172 +0x108
github.com/pingcap/tidb/pkg/session.drainRecordSet({0x70d3428, 0xc0470357d0}, 0xc07ff386c8, {0x70d3ed8, 0xc02aa83040}, {0x0?, 0x0?})
	/home/genius/project/src/github.com/pingcap/tidb/pkg/session/session.go:1238 +0xe4
github.com/pingcap/tidb/pkg/session.(*session).ExecRestrictedSQL.func1({0x70d3428, 0xc0470354d0}, 0xc07ff386c8)
	/home/genius/project/src/github.com/pingcap/tidb/pkg/session/session.go:1981 +0x2d1
github.com/pingcap/tidb/pkg/session.(*session).withRestrictedSQLExecutor(0xc06e9a8248, {0x70d3428, 0xc0470354d0}, {0x0, 0x0, 0xc109bcb000?}, 0xc0380ebb20)
	/home/genius/project/src/github.com/pingcap/tidb/pkg/session/session.go:1954 +0x2d2
github.com/pingcap/tidb/pkg/session.(*session).ExecRestrictedSQL(0x2?, {0x70d3428?, 0xc0470354d0?}, {0x0?, 0x0?, 0xc006b916c0?}, {0xc08250ca80?, 0x0?}, {0x0, 0x0, ...})
	/home/genius/project/src/github.com/pingcap/tidb/pkg/session/session.go:1958 +0x88
github.com/pingcap/tidb/pkg/domain.(*Domain).refreshMDLCheckTableInfo(0xc001b76288)
	/home/genius/project/src/github.com/pingcap/tidb/pkg/domain/domain.go:1027 +0x42a
github.com/pingcap/tidb/pkg/domain.(*Domain).loadSchemaInLoop(0xc001b76288, {0x70d3460, 0xc002164f00})
	/home/genius/project/src/github.com/pingcap/tidb/pkg/domain/domain.go:1185 +0xde
github.com/pingcap/tidb/pkg/domain.(*Domain).Start.func2()
	/home/genius/project/src/github.com/pingcap/tidb/pkg/domain/domain.go:1534 +0x25
github.com/pingcap/tidb/pkg/util.(*WaitGroupEnhancedWrapper).Run.func1()
	/home/genius/project/src/github.com/pingcap/tidb/pkg/util/wait_group_wrapper.go:103 +0x5c
created by github.com/pingcap/tidb/pkg/util.(*WaitGroupEnhancedWrapper).Run in goroutine 1
	/home/genius/project/src/github.com/pingcap/tidb/pkg/util/wait_group_wrapper.go:98 +0xae

Contributor Author


Not passing the ctx correctly caused loadSchemaInLoop to fail to exit; the stack above shows where it was stuck.

@@ -32,7 +32,7 @@ func SetupSignalHandler(shutdownFunc func()) {

 	signal.Notify(usrDefSignalChan, syscall.SIGUSR1)
 	go func() {
-		buf := make([]byte, 1<<16)
+		buf := make([]byte, 1<<17)
Contributor Author


kill -USR1 {tidb-pid}

This prints the goroutine stacks. When tidb is exiting and the 10080 status port is already closed, it is hard to check what tidb is executing and where it hangs, so this command is useful.

The problem is that 64 KiB is sometimes not enough to dump all the stacks, so some goroutine stacks get truncated.

Increase the buffer to 128 KiB here.
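A minimal sketch of the handler shape, assuming a POSIX platform (this is not TiDB's actual SetupSignalHandler): one way to avoid any fixed-size cutoff entirely is to grow the buffer until runtime.Stack no longer fills it.

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"runtime"
	"syscall"
)

// dumpStacks collects the stacks of all goroutines. Instead of hoping a
// fixed 64 KiB (or 128 KiB) buffer is large enough, it doubles the buffer
// whenever runtime.Stack fills it completely, so no stacks are truncated.
func dumpStacks() []byte {
	buf := make([]byte, 1<<17) // start at 128 KiB, as in the PR
	for {
		n := runtime.Stack(buf, true) // true = all goroutines
		if n < len(buf) {
			return buf[:n]
		}
		buf = make([]byte, 2*len(buf))
	}
}

func main() {
	// On SIGUSR1, write the full goroutine dump to stderr.
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, syscall.SIGUSR1)
	go func() {
		for range ch {
			os.Stderr.Write(dumpStacks())
		}
	}()
	fmt.Printf("stack dump is %d bytes\n", len(dumpStacks()))
}
```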

-	source          string
-	registerProcess sync.Map
+	source string
+	mu     struct {
Contributor Author

@tiancaiamao tiancaiamao Dec 25, 2024


If we set this in the tidb toml configuration file:

tidb-enable-exit-check = true

we can get the list of unexited services, but it does not seem to be enabled by default.

In any case we have dlv: we can attach to the tidb process to see where it blocks, check this goroutine's stack, and print the value of registerProcess to see which services have not exited.

The old code used sync.Map, which is opaque and hard to inspect in a debugger.
Changing it to a plain map makes debugging much easier.

I don't think performance really matters here, so a mutex-protected map is preferable to sync.Map.
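The shape of the change can be sketched as follows (a minimal illustration, not the PR's actual WaitGroupEnhancedWrapper code; registry and its methods are hypothetical): a plain map guarded by a mutex holds the same data as sync.Map, but a debugger can simply print the map to see every live entry.

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// registry tracks which background processes are still running. A plain map
// under a mutex is trivial to inspect from a debugger (dlv can just print
// r.procs), whereas sync.Map hides its contents in opaque internals.
type registry struct {
	mu    sync.Mutex
	procs map[string]struct{}
}

func newRegistry() *registry {
	return &registry{procs: make(map[string]struct{})}
}

func (r *registry) Add(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.procs[name] = struct{}{}
}

func (r *registry) Remove(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	delete(r.procs, name)
}

// Unexited returns a sorted list of still-registered processes, the kind of
// list a periodic "background process unexited" warning would log.
func (r *registry) Unexited() []string {
	r.mu.Lock()
	defer r.mu.Unlock()
	names := make([]string, 0, len(r.procs))
	for name := range r.procs {
		names = append(names, name)
	}
	sort.Strings(names)
	return names
}

func main() {
	r := newRegistry()
	r.Add("loadSchemaInLoop")
	r.Add("runawayWatchSyncLoop")
	r.Remove("runawayWatchSyncLoop")
	fmt.Println(r.Unexited())
}
```

Since registration and deregistration happen only when a background service starts or stops, the mutex is never contended enough for sync.Map's lock-free reads to matter.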

@@ -56,7 +59,7 @@ func (w *WaitGroupEnhancedWrapper) checkUnExitedProcess(exit chan struct{}) {
 	<-exit
 	logutil.BgLogger().Info("waitGroupWrapper start exit-checking", zap.String("source", w.source))
 	if w.check() {
-		ticker := time.NewTimer(2 * time.Second)
+		ticker := time.NewTicker(2 * time.Second)
Contributor Author


This is a bug fix: it should be a ticker, not a timer.
Before the fix, the log line printed only twice.

After the fix, it fires periodically and helped me find the unexited services:

[2024/12/25 15:09:52.072 +08:00] [WARN] [wait_group_wrapper.go:78] ["background process unexited while received exited signal"] [process="[PlanReplayerTaskDumpHandle,analyzeJobsCleanupWorker,PlanReplayerTaskCollectHandle,autoAnalyzeWorker,loadStatsWorker,asyncLoadHistogram,runawayWatchSyncLoop,mdlCheckLoop,logBackupAdvancer,LoadSysVarCacheLoop,globalConfigSyncerKeeper,closestReplicaReadCheckLoop,updateStatsWorker,loadSigningCertLoop,HistoricalStatsWorker,indexUsageWorker,topNSlowQueryLoop,infoSyncerKeeper,requestUnitsWriterLoop,topologySyncerKeeper,loadPrivilegeInLoop,loadSchemaInLoop,globalBindHandleWorkerLoop,distTaskFrameworkLoop]"] [source=domain]
[2024/12/25 15:09:54.073 +08:00] [WARN] [wait_group_wrapper.go:78] ["background process unexited while received exited signal"] [process="[loadSchemaInLoop,globalBindHandleWorkerLoop,PlanReplayerTaskCollectHandle,autoAnalyzeWorker,runawayWatchSyncLoop,updateStatsWorker]"] [source=domain]
[2024/12/25 16:17:02.259 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[loadSchemaInLoop,distTaskFrameworkLoop,logBackupAdvancer,PlanReplayerTaskDumpHandle,planCacheEvictTrigger,globalConfigSyncerKeeper,closestReplicaReadCheckLoop,LoadSysVarCacheLoop,globalBindHandleWorkerLoop,analyzeJobsCleanupWorker,PlanReplayerTaskCollectHandle,updateStatsWorker,topNSlowQueryLoop,loadPrivilegeInLoop,runawayWatchSyncLoop,HistoricalStatsWorker,dumpFileGcChecker,loadStatsWorker,autoAnalyzeWorker]"] [source=domain]
[2024/12/25 16:17:04.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[LoadSysVarCacheLoop,globalBindHandleWorkerLoop,analyzeJobsCleanupWorker,PlanReplayerTaskCollectHandle,updateStatsWorker,runawayWatchSyncLoop,loadStatsWorker,autoAnalyzeWorker,loadSchemaInLoop]"] [source=domain]
[2024/12/25 16:17:06.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[globalBindHandleWorkerLoop,analyzeJobsCleanupWorker,PlanReplayerTaskCollectHandle,updateStatsWorker,runawayWatchSyncLoop,loadStatsWorker,loadSchemaInLoop,LoadSysVarCacheLoop]"] [source=domain]
[2024/12/25 16:17:08.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[loadSchemaInLoop,LoadSysVarCacheLoop,globalBindHandleWorkerLoop,analyzeJobsCleanupWorker,PlanReplayerTaskCollectHandle,updateStatsWorker,runawayWatchSyncLoop,loadStatsWorker]"] [source=domain]
[2024/12/25 16:17:10.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[loadSchemaInLoop,LoadSysVarCacheLoop,globalBindHandleWorkerLoop,analyzeJobsCleanupWorker,PlanReplayerTaskCollectHandle,updateStatsWorker,runawayWatchSyncLoop,loadStatsWorker]"] [source=domain]
[2024/12/25 16:17:12.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[loadStatsWorker,loadSchemaInLoop,LoadSysVarCacheLoop,globalBindHandleWorkerLoop,analyzeJobsCleanupWorker,PlanReplayerTaskCollectHandle,updateStatsWorker,runawayWatchSyncLoop]"] [source=domain]
[2024/12/25 16:17:14.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[PlanReplayerTaskCollectHandle,updateStatsWorker,runawayWatchSyncLoop,loadStatsWorker,loadSchemaInLoop,LoadSysVarCacheLoop,globalBindHandleWorkerLoop,analyzeJobsCleanupWorker]"] [source=domain]
[2024/12/25 16:17:16.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[LoadSysVarCacheLoop,globalBindHandleWorkerLoop,PlanReplayerTaskCollectHandle,updateStatsWorker,runawayWatchSyncLoop,loadStatsWorker,loadSchemaInLoop]"] [source=domain]
[2024/12/25 16:17:18.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[loadStatsWorker,loadSchemaInLoop,LoadSysVarCacheLoop,globalBindHandleWorkerLoop,PlanReplayerTaskCollectHandle,updateStatsWorker,runawayWatchSyncLoop]"] [source=domain]
[2024/12/25 16:17:20.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[globalBindHandleWorkerLoop,PlanReplayerTaskCollectHandle,updateStatsWorker,runawayWatchSyncLoop,loadStatsWorker,loadSchemaInLoop,LoadSysVarCacheLoop]"] [source=domain]
[2024/12/25 16:17:22.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[PlanReplayerTaskCollectHandle,updateStatsWorker,runawayWatchSyncLoop,loadStatsWorker,loadSchemaInLoop,LoadSysVarCacheLoop,globalBindHandleWorkerLoop]"] [source=domain]
[2024/12/25 16:17:24.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[LoadSysVarCacheLoop,globalBindHandleWorkerLoop,PlanReplayerTaskCollectHandle,updateStatsWorker,runawayWatchSyncLoop,loadStatsWorker,loadSchemaInLoop]"] [source=domain]
[2024/12/25 16:17:26.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[loadSchemaInLoop,LoadSysVarCacheLoop,globalBindHandleWorkerLoop,PlanReplayerTaskCollectHandle,updateStatsWorker,runawayWatchSyncLoop,loadStatsWorker]"] [source=domain]
[2024/12/25 16:17:28.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[LoadSysVarCacheLoop,globalBindHandleWorkerLoop,PlanReplayerTaskCollectHandle,updateStatsWorker,runawayWatchSyncLoop,loadStatsWorker,loadSchemaInLoop]"] [source=domain]
[2024/12/25 16:17:30.261 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[runawayWatchSyncLoop,loadStatsWorker,loadSchemaInLoop,LoadSysVarCacheLoop,globalBindHandleWorkerLoop,PlanReplayerTaskCollectHandle,updateStatsWorker]"] [source=domain]
[2024/12/25 16:17:32.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[runawayWatchSyncLoop,loadStatsWorker,loadSchemaInLoop,globalBindHandleWorkerLoop,PlanReplayerTaskCollectHandle,updateStatsWorker]"] [source=domain]
[2024/12/25 16:17:34.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[loadStatsWorker,loadSchemaInLoop,globalBindHandleWorkerLoop,PlanReplayerTaskCollectHandle,updateStatsWorker,runawayWatchSyncLoop]"] [source=domain]
[2024/12/25 16:17:36.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[PlanReplayerTaskCollectHandle,updateStatsWorker,runawayWatchSyncLoop,loadStatsWorker,loadSchemaInLoop,globalBindHandleWorkerLoop]"] [source=domain]
[2024/12/25 16:17:38.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[PlanReplayerTaskCollectHandle,updateStatsWorker,runawayWatchSyncLoop,loadStatsWorker,loadSchemaInLoop,globalBindHandleWorkerLoop]"] [source=domain]
[2024/12/25 16:17:40.261 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[globalBindHandleWorkerLoop,PlanReplayerTaskCollectHandle,runawayWatchSyncLoop,loadStatsWorker,loadSchemaInLoop]"] [source=domain]
[2024/12/25 16:17:42.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[PlanReplayerTaskCollectHandle,runawayWatchSyncLoop,loadStatsWorker,loadSchemaInLoop,globalBindHandleWorkerLoop]"] [source=domain]
[2024/12/25 16:17:44.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[runawayWatchSyncLoop,loadSchemaInLoop,PlanReplayerTaskCollectHandle]"] [source=domain]
[2024/12/25 16:17:46.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[runawayWatchSyncLoop,loadSchemaInLoop,PlanReplayerTaskCollectHandle]"] [source=domain]
[2024/12/25 16:17:48.261 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[PlanReplayerTaskCollectHandle,runawayWatchSyncLoop,loadSchemaInLoop]"] [source=domain]
[2024/12/25 16:17:50.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[loadSchemaInLoop,PlanReplayerTaskCollectHandle,runawayWatchSyncLoop]"] [source=domain]
[2024/12/25 16:17:52.261 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[runawayWatchSyncLoop,loadSchemaInLoop]"] [source=domain]
[2024/12/25 16:17:54.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[runawayWatchSyncLoop,loadSchemaInLoop]"] [source=domain]
[2024/12/25 16:17:56.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[loadSchemaInLoop,runawayWatchSyncLoop]"] [source=domain]
[2024/12/25 16:17:58.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[loadSchemaInLoop,runawayWatchSyncLoop]"] [source=domain]
[2024/12/25 16:18:00.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[runawayWatchSyncLoop,loadSchemaInLoop]"] [source=domain]
[2024/12/25 16:18:02.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[runawayWatchSyncLoop,loadSchemaInLoop]"] [source=domain]
[2024/12/25 16:18:04.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[loadSchemaInLoop,runawayWatchSyncLoop]"] [source=domain]
[2024/12/25 16:18:06.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[loadSchemaInLoop,runawayWatchSyncLoop]"] [source=domain]
[2024/12/25 16:18:08.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process="[runawayWatchSyncLoop,loadSchemaInLoop]"] [source=domain]
[2024/12/25 16:18:10.260 +08:00] [WARN] [wait_group_wrapper.go:82] ["background process unexited while received exited signal"] [process


codecov bot commented Dec 25, 2024

Codecov Report

Attention: Patch coverage is 95.23810% with 1 line in your changes missing coverage. Please review.

Project coverage is 75.2707%. Comparing base (444a1b9) to head (0a6384c).
Report is 14 commits behind head on master.

Additional details and impacted files
@@               Coverage Diff                @@
##             master     #58537        +/-   ##
================================================
+ Coverage   73.5444%   75.2707%   +1.7263%     
================================================
  Files          1681       1726        +45     
  Lines        464420     475801     +11381     
================================================
+ Hits         341555     358139     +16584     
+ Misses       102004      95549      -6455     
- Partials      20861      22113      +1252     
Flag        | Coverage Δ
integration | 49.4399% <71.4285%> (?)
unit        | 72.6878% <95.0000%> (+0.3919%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

Components | Coverage Δ
dumpling   | 52.6910% <ø> (ø)
parser     | ∅ <ø> (∅)
br         | 61.7880% <ø> (+16.0075%) ⬆️

@tiancaiamao tiancaiamao requested review from lance6716 and D3Hunter and removed request for lance6716 December 25, 2024 11:06
@ti-chi-bot ti-chi-bot bot added approved needs-1-more-lgtm Indicates a PR needs 1 more LGTM. labels Dec 25, 2024

ti-chi-bot bot commented Dec 26, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: D3Hunter, hawkingrei

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ti-chi-bot ti-chi-bot bot added lgtm and removed needs-1-more-lgtm Indicates a PR needs 1 more LGTM. labels Dec 26, 2024

ti-chi-bot bot commented Dec 26, 2024

[LGTM Timeline notifier]

Timeline:

  • 2024-12-25 12:47:53.353318753 +0000 UTC m=+1652263.442121296: ☑️ agreed by D3Hunter.
  • 2024-12-26 06:57:24.58594121 +0000 UTC m=+1717634.674743751: ☑️ agreed by hawkingrei.

@tiancaiamao
Contributor Author

/retest


tiprow bot commented Dec 26, 2024

@tiancaiamao: Cannot trigger testing until a trusted user reviews the PR and leaves an /ok-to-test message.

In response to this:

/retest

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@ti-chi-bot ti-chi-bot bot merged commit 30d8684 into pingcap:master Dec 26, 2024
26 checks passed
@tiancaiamao tiancaiamao deleted the exit1 branch December 26, 2024 13:02
Labels
approved lgtm release-note-none Denotes a PR that doesn't merit a release note. size/M Denotes a PR that changes 30-99 lines, ignoring generated files.

Successfully merging this pull request may close these issues.

tikv OOM first, the Ctrl+C fail to make tidb exit
3 participants