
[Bug] Memory leaks occur in HugeGraph Server during data writing. #2578

Open · 1 task done
haohao0103 opened this issue Jul 12, 2024 · 2 comments
Labels
bug (Something isn't working) · rocksdb (RocksDB backend)

Comments

haohao0103 (Contributor) commented Jul 12, 2024

Bug Type

None

Before submit

  • I have confirmed and searched that there are no similar or duplicate problems in the existing Issues / FAQ.

Environment

  • Server Version: 1.5.0 (Apache Release Version)
  • Backend: RocksDB (5 nodes, SSD)

Expected & Actual behavior

When the memory leak occurs in HugeGraph Server during data writing, the JVM object histogram is as follows:
jmap -histo:live 51680 | head -n 10

 num     #instances         #bytes  class name (module)
-------------------------------------------------------
   1:     284880553    13509899520  [B (java.base)
   2:     284703909     9110525088  java.lang.String (java.base)
   3:     283905229     6813725496  org.apache.hugegraph.backend.id.IdGenerator$StringId
   4:        567813     2284841352  [Lorg.apache.hugegraph.backend.id.Id;
   5:       1384040      182210368  [Ljava.lang.Object; (java.base)
   6:       2270975       90839000  java.util.concurrent.ConcurrentLinkedDeque$Node (java.base)
   7:       1191421       76250944  java.util.LinkedHashMap$Entry (java.base)

The issue was eventually traced to CachedGraphTransaction, which clears the edge cache whenever vertices are written. When a large number of vertices are written, commitMutation2Backend() calls this.notifyChanges(Cache.ACTION_INVALIDED, HugeType.VERTEX, vertexIds) for every batch. These notifications back up in the single-threaded thread pool inside EventHub, and the queued tasks hold on to the vertexIds data, causing the memory leak. A sketch of this failure mode is shown below.
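
To make the mechanism concrete, here is a minimal, self-contained Java sketch. It is not HugeGraph code: EventBacklogDemo, notifyInvalidate, the batch sizes and the sleep durations are all made up for illustration. It only shows how a single worker thread with an unbounded queue retains every submitted invalidation task, and therefore the vertex-id batches those tasks capture, when the producer outpaces the consumer:

import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class EventBacklogDemo {

    // Single worker thread draining an unbounded queue, similar in spirit to
    // the single-threaded executor inside EventHub.
    private static final ThreadPoolExecutor NOTIFIER =
            new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS,
                                   new LinkedBlockingQueue<>());

    static void notifyInvalidate(List<String> vertexIds) {
        // The lambda captures vertexIds, so every queued task pins its batch
        // of ids in memory until the worker thread finally runs it.
        NOTIFIER.execute(() -> {
            for (String id : vertexIds) {
                // Pretend to evict edge-cache entries for this vertex id.
            }
            try {
                Thread.sleep(1); // simulate a slow cache-invalidation handler
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
    }

    public static void main(String[] args) {
        // Fast producer: batches are submitted far faster than the single
        // consumer can drain them, so the queue (and heap usage) keeps growing.
        for (int batch = 0; batch < 100_000; batch++) {
            notifyInvalidate(List.of("v-" + batch + "-a", "v-" + batch + "-b"));
        }
        System.out.println("tasks still queued: " + NOTIFIER.getQueue().size());
        NOTIFIER.shutdownNow();
    }
}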

Vertex/Edge example

No response

Schema [VertexLabel, EdgeLabel, IndexLabel]

No response

@haohao0103 added the bug (Something isn't working) label on Jul 12, 2024
@dosubot (bot) added the rocksdb (RocksDB backend) label on Jul 12, 2024

dosubot bot commented Jul 12, 2024

To resolve the memory leak issue in HugeGraph Server during data writing, specifically related to the CachedGraphTransaction and the backlog of tasks in the EventHub, you should ensure that the unlistenChanges method is called to remove the event listeners when they are no longer needed. This will help prevent the accumulation of tasks in the EventHub.

The unlistenChanges method is already defined in the CachedGraphTransaction class and is called in the close method. Ensure that the close method is properly invoked when the transaction is done to clean up the event listeners.

Here is the relevant part of the code:

@Override
public void close() {
    try {
        super.close();
    } finally {
        this.unlistenChanges();
    }
}

private void unlistenChanges() {
    // Unlisten store event
    this.store().provider().unlisten(this.storeEventListener);

    // Unlisten cache event
    EventHub graphEventHub = this.params().graphEventHub();
    graphEventHub.unlisten(Events.CACHE, this.cacheEventListener);
}
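
As a usage note, the listeners are only removed if close() actually runs, so callers should close the transaction on every code path. A short illustrative fragment follows; the graph.openTransaction() accessor is hypothetical, not the actual HugeGraph API:

// Illustrative fragment only; how the transaction is obtained here is a
// hypothetical accessor, not the real HugeGraph API.
CachedGraphTransaction tx = graph.openTransaction();
try {
    // ... add vertices/edges, then commit ...
    tx.commit();
} finally {
    // close() calls super.close() and then unlistenChanges(), which removes
    // the Events.CACHE listener from the graph EventHub.
    tx.close();
}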

Additionally, ensure that the reset method in the GraphTransaction class is properly invoked to clear any accumulated state that might contribute to memory leaks. The reset method clears various collections that hold transaction state, which is crucial for preventing memory leaks.

Here is the relevant part of the reset method:

@Override
protected void reset() {
    super.reset();

    // Clear mutation
    if (this.addedVertices == null || !this.addedVertices.isEmpty()) {
        this.addedVertices = InsertionOrderUtil.newMap();
    }
    if (this.removedVertices == null || !this.removedVertices.isEmpty()) {
        this.removedVertices = InsertionOrderUtil.newMap();
    }
    if (this.updatedVertices == null || !this.updatedVertices.isEmpty()) {
        this.updatedVertices = InsertionOrderUtil.newMap();
    }
    if (this.addedEdges == null || !this.addedEdges.isEmpty()) {
        this.addedEdges = InsertionOrderUtil.newMap();
    }
    if (this.removedEdges == null || !this.removedEdges.isEmpty()) {
        this.removedEdges = InsertionOrderUtil.newMap();
    }
    if (this.updatedEdges == null || !this.updatedEdges.isEmpty()) {
        this.updatedEdges = InsertionOrderUtil.newMap();
    }
    if (this.addedProps == null || !this.addedProps.isEmpty()) {
        this.addedProps = InsertionOrderUtil.newSet();
    }
    if (this.removedProps == null || !this.removedProps.isEmpty()) {
        this.removedProps = InsertionOrderUtil.newSet();
    }
    if (this.updatedOldestProps == null || !this.updatedOldestProps.isEmpty()) {
        this.updatedOldestProps = InsertionOrderUtil.newSet();
    }
}

Make sure that both the close and reset methods are called appropriately in your application to avoid memory leaks [1][2].


haohao0103 (Contributor, Author) commented Jul 16, 2024

@imbajin @VGalaxies @JackyYangPassion hello, the fix for this issue lives in the HugeGraph Commons project; here is the PR: apache/incubator-hugegraph-commons#144
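
For readers who cannot upgrade right away, one generic way to limit this kind of notifier backlog is to give the executor a bounded queue with back-pressure. The sketch below is only a general mitigation pattern; it is not a description of what apache/incubator-hugegraph-commons#144 actually changes, and BoundedNotifier and its capacity are invented for illustration:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedNotifier {

    private final ThreadPoolExecutor executor = new ThreadPoolExecutor(
            1, 1, 0L, TimeUnit.MILLISECONDS,
            // At most 10_000 pending events; beyond that the submitting
            // thread runs the task itself, which applies back-pressure
            // instead of letting queued events pin their payloads.
            new ArrayBlockingQueue<>(10_000),
            new ThreadPoolExecutor.CallerRunsPolicy());

    public void notifyEvent(Runnable event) {
        executor.execute(event);
    }

    public void shutdown() {
        executor.shutdown();
    }
}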

VGalaxies pushed a commit that referenced this issue on Aug 3, 2024: … writing (#144)

* #2578
  fixed memory leaks occurring in HugeGraph Server during data writing