Hi Sebastien - sounds good to me. See also my whitepaper http://earthcube.ning.com/group/earthcube-design-approaches/forum/t...
Would appreciate your comments! - Anna
Thank you for the reference: your white paper does intersect with some of what we (Evan, AJ, Nathaniel and myself) mentioned, and I was glad to read it.
We also thought of Google Earth as a great example of a common visualization tool that could be used as a basis for what we envisioned. I am less familiar with the other initiatives you mentioned, but as you noted, everything your proposal and ours suggest relies on existing technologies, and some of it on existing projects that just need to be better shared and extended across disciplines.
The concept of a common platform, where models could learn from and inform available data, would be a great achievement, and I certainly hope to see it materialize in the future.
I also really like the idea of a scientific social network. I would even extend it to published work, where you could directly comment on and discuss existing papers with the community.
Finally, we all agree that the first and most important challenge for the community is data formats and documentation: this is the one issue I would like to see solved above all else.
I hope your message will be heard!
Thanks! It's great that we are thinking along similar lines.
The problem with Google Earth is that it is unfortunately built with a very major restriction that could be most accurately described as 2.5D + time: it supports 3D features on top of the 2D Earth's surface, but it doesn't directly allow cutting through the 3D Earth. This works like a charm for most practical purposes, but won't (alas!) work for Earth science in general. There might be a way to overcome this restriction (I don't work for Google Earth so can't tell for sure), but so far I think the most anyone could do is work around it, by implementing 3D features (such as Earth slices) in much the same way digital architecture is done, on top of the 2D map.
So I think we would essentially have to start afresh to implement something like I'm suggesting, unless Google would be willing to collaborate...
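To make the workaround concrete, here is a toy sketch of the idea: sample a 3D subsurface model along a vertical transect and flatten it into a 2D image that a 2.5D tool like Google Earth could drape on the surface as an overlay. The `velocity` field and the transect coordinates below are entirely made up for illustration; a real slice would come from an actual Earth model.

```python
import numpy as np

# Hypothetical 3D subsurface model: seismic velocity (km/s) as a
# function of (latitude, longitude, depth).  A toy analytic field.
def velocity(lat, lon, depth_km):
    return 5.0 + 0.01 * depth_km + 0.1 * np.sin(np.radians(lat))

# Sample a vertical slice along a straight lat/lon transect,
# down to 100 km depth.
lats = np.linspace(30.0, 45.0, 50)
lons = np.linspace(-120.0, -100.0, 50)
depths = np.linspace(0.0, 100.0, 40)

# slice_img[i, j] = velocity at transect point j, depth level i:
# a flat 2D image that a 2.5D viewer can drape on the map.
slice_img = np.array([[velocity(la, lo, d) for la, lo in zip(lats, lons)]
                      for d in depths])

print(slice_img.shape)  # (40, 50)
```

The point is only that the third (depth) dimension has to be baked into a 2D raster before the viewer ever sees it, which is exactly the restriction being discussed.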
Hey, Anna. Thanks for the paper! It is a good read! As mentioned, your discussion of a data framework and a model framework really intersects with the first two points of our paper.
As Sebastien said, we agree that Google Earth is more of a model we had in mind for how things might work. There are a lot of things I really like about it: the way you can zoom from large to small scales so quickly, the way it can pull in lots of data sources through the Internet seamlessly and fast, its extensibility through drag-and-drop KMZ files, its multi-platform support, and its fantastic visualization tools.
I took a brief look at the web sites (such as SPUD, GEON, and EMC) of the tools that you discussed in your paper. I had never heard of them before, but they look like they have a lot of promise. I think once we get all of the requirements fleshed out for what we want in our frameworks, a survey of existing tools and projects will really need to be done. I bet there are a lot more projects like those out there, and it would be great to get an understanding of the capabilities and goals of each of them.

I also thought about collaborations with Google... they sure do have a lot of experience working with large data sets!
Anna Kelbert pointed me at your article. I am particularly interested in the problem of making data across disciplines more readily available on an ongoing basis. I have some notes around this in
One thing I am interested in is the value of a 4D database abstraction as a foundation for some of the higher-level needs. I'd be interested in your thoughts.
A 4D database would be ideal, whether it is common across all geosciences or fragmented into specific disciplines with common transfer/access protocols. I think the biggest challenge in achieving this goal will be getting data owners to agree on good data sharing practices (common formats, up-to-date data, proper documentation...). I think technology solutions will follow as soon as data products are more unified.
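To make the common access protocol idea concrete, here is a minimal Python sketch of what a shared 4D (time, depth, lat, lon) query interface might look like. Every name here is my own invention, and the in-memory numpy backing is a stand-in: the point is only that many different storage systems could sit behind one interface like this.

```python
import numpy as np

class FourDStore:
    """Toy sketch of a 4D (time, depth, lat, lon) data abstraction.

    A real implementation would sit on top of a database or file
    format; only the common range-query interface matters here.
    """
    def __init__(self, data, times, depths, lats, lons):
        self.data = data
        self.axes = (np.asarray(times), np.asarray(depths),
                     np.asarray(lats), np.asarray(lons))

    def query(self, t_range, d_range, lat_range, lon_range):
        """Return the sub-cube whose axis values fall in each
        half-open [lo, hi) range."""
        slices = []
        for ax, (lo, hi) in zip(self.axes, (t_range, d_range,
                                            lat_range, lon_range)):
            i0, i1 = np.searchsorted(ax, [lo, hi])
            slices.append(slice(i0, i1))
        return self.data[tuple(slices)]

# Usage on a tiny synthetic cube:
data = np.arange(2 * 3 * 4 * 5, dtype=float).reshape(2, 3, 4, 5)
store = FourDStore(
    data,
    times=[0.0, 1.0],
    depths=[0.0, 10.0, 20.0],
    lats=[30.0, 31.0, 32.0, 33.0],
    lons=[-120.0, -119.0, -118.0, -117.0, -116.0],
)
sub = store.query((0.0, 2.0), (0.0, 15.0), (30.0, 33.0), (-120.0, -117.0))
print(sub.shape)  # (2, 2, 3, 3)
```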
Thank you for your notes: I had never heard of SciDB before, and I will keep reading about it to understand how it could help the EarthCube initiative.
Hi, Chris. I think your idea of the 4D database abstraction speaks very well to our "Issues of Data Usability" and Anna's "Data Framework". We really need to move toward a system where you can easily get ahold of and understand all the data to
I think that's a very good point that needs to be thought through. Personally I have never encountered geophysical data that couldn't be unambiguously mapped to a set of locations in space-time coordinates, but it is conceivable that such data might exist (and the satellite example that you mention might well be one such case). And of course some information is best described in a different coordinate system. All this would need to be accounted for somehow; certainly coordinate system transformations should be part of the infrastructure that we discuss. For this kind of information to be useful in such a framework, one should have the option to map it to the usual 4D space in the manner most appropriate for his or her problem. Can't be more specific than that at this point!
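As a trivial instance of the kind of transformation such an infrastructure would need, here is a sketch mapping geodetic coordinates to Earth-centered Cartesian coordinates under a spherical-Earth assumption. This is deliberately simplified: a real pipeline would use a proper reference ellipsoid such as WGS84 rather than a single mean radius.

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean spherical radius (no ellipsoid)

def geodetic_to_cartesian(lat_deg, lon_deg, depth_km):
    """Map (latitude, longitude, depth) to Earth-centered
    Cartesian (x, y, z) in km, assuming a spherical Earth."""
    r = EARTH_RADIUS_KM - depth_km
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    x = r * math.cos(lat) * math.cos(lon)
    y = r * math.cos(lat) * math.sin(lon)
    z = r * math.sin(lat)
    return x, y, z

# A surface point on the equator at the prime meridian
# lands on the x-axis:
print(geodetic_to_cartesian(0.0, 0.0, 0.0))  # (6371.0, 0.0, 0.0)
```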
Re handling nD cubes, let me point you to:
- rasdaman, an nD raster database and operational system allowing slicing and dicing across space and time simultaneously. Used, for example, in the transatlantic EarthServer initiative.
- the OGC coverage data and service standards, which support spatio-temporal data cubes.
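Conceptually, the slicing and dicing these systems offer reduces to taking lower-dimensional sections and sub-cubes of an nD array. In numpy terms (purely illustrative, not rasdaman or OGC syntax, and with a made-up random cube):

```python
import numpy as np

# Toy 4D coverage with axes (time, depth, lat, lon)
cube = np.random.default_rng(0).random((12, 8, 90, 180))

time_slice  = cube[3]             # fix time          -> 3D (depth, lat, lon)
depth_slice = cube[:, 0]          # fix depth         -> 3D (time, lat, lon)
profile     = cube[:, :, 45, 90]  # fix lat and lon   -> 2D (time, depth)
sub_cube    = cube[3:6, :, 30:60, 60:120]  # "dice": a trimmed sub-cube

print(time_slice.shape, depth_slice.shape, profile.shape, sub_cube.shape)
```

The value of systems like rasdaman is doing this server-side over very large coverages, so only the requested slice crosses the network.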
I really like your ideas. These cut to the priority issues that we need to wrestle to the ground. How would you see the graduate student training component of this happening? One of the main issues here is that some senior faculty are still doing science in the same manner as when they were trained, which means collecting their own data instead of going out and harvesting the entire set of related data. Could you envision a venue or way in which students whose professors are not into serious global data discovery would be able to be trained in high-level data access and discovery techniques? Similarly with regard to modeling: with few exceptions, modeling codes tend to be "hero codes", and it is very difficult for others to use them, especially if they are not familiar with the programming. Give me some ideas of how you see we can democratize the training, and where you see it taking place (in the classroom, online, elsewhere?).