
Question about PubMed performance #7

Open
ltz0120 opened this issue Jan 30, 2022 · 6 comments

Comments


ltz0120 commented Jan 30, 2022

Hi,

In your paper, GRACE achieves 86.7% on PubMed and DGI achieves 86% on PubMed. However, the original DGI paper reports only 76.8% on PubMed. I also notice that you follow the DGI setting in your experiments. How did you obtain an improvement of almost 10 percentage points for DGI?

@downeykking

The dataset split setting is different. DGI uses the public split, while GRACE uses a random split of 10% train, 10% validation, and 80% test.

@gaoshihao333

Doesn't GRACE's train/test split follow 90% / 10%?

@downeykking

In the paper:

"On these citation networks, we randomly select 10% of the nodes as the training set, 10% nodes as the validation set, and leave the rest nodes as the test set."

So the train/valid/test split follows 10% / 10% / 80%.

In the code, however, the train/test split follows 10% / 90%:
https://github.com/CRIPAC-DIG/GRACE/blob/master/train.py#L41C4
https://github.com/CRIPAC-DIG/GRACE/blob/master/eval.py#L58
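
To make that concrete, here is a minimal sketch of such an evaluation: a logistic-regression probe trained on a random 10% / 90% train/test node split with no validation set. This is not the repository's actual code; the function and variable names are illustrative.

```python
# Illustrative sketch, not the GRACE repository's actual evaluation code.
# Evaluate frozen node embeddings with a logistic-regression probe on a
# random 10% / 90% train/test split of the nodes (no validation set).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def evaluate_embeddings(z, y, train_ratio=0.1, seed=0):
    """z: (num_nodes, dim) frozen embeddings, y: (num_nodes,) class labels."""
    idx = np.arange(z.shape[0])
    train_idx, test_idx = train_test_split(
        idx, train_size=train_ratio, random_state=seed, stratify=y)
    clf = LogisticRegression(max_iter=1000).fit(z[train_idx], y[train_idx])
    return accuracy_score(y[test_idx], clf.predict(z[test_idx]))
```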

@gaoshihao333

Thanks for answering! So, no validation sets are used in the code?

@gaoshihao333

What about GCA? Are validation sets used in GCA?

@downeykking

GCA uses validation sets, whereas GRACE does not. You can refer to eval.py in each repository.
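
For comparison, here is a minimal sketch of a 10% / 10% / 80% split where the validation accuracy selects the probe before the test accuracy is reported. This is an assumed setup, not GCA's actual code; the hyperparameter grid and names are illustrative.

```python
# Illustrative sketch, not GCA's actual code: a random 10% / 10% / 80% split
# where the validation set picks the regularization strength of the probe.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score


def evaluate_with_validation(z, y, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    n_train = n_val = int(0.1 * len(y))
    train_idx = idx[:n_train]
    val_idx = idx[n_train:n_train + n_val]
    test_idx = idx[n_train + n_val:]

    best_val, best_clf = -1.0, None
    for c in (0.01, 0.1, 1.0, 10.0):  # model selection on the validation set
        clf = LogisticRegression(C=c, max_iter=1000).fit(z[train_idx], y[train_idx])
        val_acc = accuracy_score(y[val_idx], clf.predict(z[val_idx]))
        if val_acc > best_val:
            best_val, best_clf = val_acc, clf
    # Report test accuracy only for the probe chosen on the validation set.
    return accuracy_score(y[test_idx], best_clf.predict(z[test_idx]))
```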
