As long as you can express your queries with the filter API, you should have no problem querying that amount of data with Graphcool. We have customers operating on millions of nodes without issue. Does that apply to your use case?
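To illustrate what I mean by expressing a query with the filter API, here is a sketch against a hypothetical `Order` type (the type and field names are just examples, not from your schema):

```graphql
# Filter server-side instead of downloading all nodes to the client.
# "_gt" is one of the comparison suffixes the filter API supports.
query {
  allOrders(filter: {
    status: "SHIPPED",
    total_gt: 100
  }) {
    id
    total
  }
}
```

If your calculations can be pushed into filters like this, the data size on the server matters far less than the size of the result set you pull down.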
If for some reason you have to download all the data to the client to perform more complex calculations, then you will see performance degrade as the data size grows.
In either case, I would suggest generating a large body of test data for your data structure and experimenting a little to get a feeling for the performance. You should also be aware that there is a limit of 1000 nodes returned in a single request on the shared cluster, so if you really want to return all the data you will have to implement pagination.
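Pagination can be done with the `first`/`skip` arguments. A sketch, again assuming a hypothetical `allOrders` query; you would issue this repeatedly, increasing `skip` by the page size until a page comes back with fewer than 1000 nodes:

```graphql
# Fetch the third page of 1000 nodes (nodes 2000-2999).
query {
  allOrders(first: 1000, skip: 2000) {
    id
    total
  }
}
```

For large offsets, cursor-based pagination (`after` with a node id) tends to perform better than `skip`, so consider that if you end up paging through the whole dataset regularly.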