How much engineering is required to design and evolve information machines so that they can always differentiate between data and metadata, and then go further to create knowledge and insights from both? While thinking about this question, I came across the role that search engines play here. Search engines are undoubtedly information machines: they learn the semantic graphs of information relationships in a sea of indexes. Let me bring some metrics in. How will we measure the effectiveness of search engines as information machines? Do we need some axes along which to plot and visualize their effectiveness?
One of my favorite questions is: how much of this data, information, relationships, indexes, and semantic webs can be converted into working knowledge by search engines? Or rather, can they do this task at all without human intervention?
Along the same lines, my categorization, or differential positioning, of metadata with respect to data will not be based merely on the relationship between the meaning and associative positioning of data nodes. It will instead be based on how one node of data connects another node of data to a third node of data. Data and metadata should therefore always have more than one connecting dot between them. Thus, when we use a search engine to find all the metadata about a particular data node, it should find that metadata based on this semantic graph.
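The "more than one connecting dot" rule above can be sketched in code. This is a minimal illustration, not an actual search-engine implementation: I assume a toy semantic graph stored as an adjacency dict, and I count one "dot" for a direct edge plus one for each shared intermediate node. The node names (`book:TAOCP`, etc.) are invented for the example.

```python
# A hypothetical semantic graph as a symmetric adjacency dict.
# Node names are illustrative only.
graph = {
    "author:Knuth":     {"book:TAOCP", "topic:algorithms"},
    "book:TAOCP":       {"author:Knuth", "topic:algorithms", "year:1968"},
    "topic:algorithms": {"author:Knuth", "book:TAOCP"},
    "year:1968":        {"book:TAOCP"},
}

def metadata_for(graph, data_node):
    """Return nodes linked to data_node by more than one 'connecting dot'.

    A dot is either a direct edge to data_node or a shared neighbor
    (a third node connecting the candidate to data_node).
    """
    neighbors = graph.get(data_node, set())
    results = set()
    for candidate, cand_neighbors in graph.items():
        if candidate == data_node:
            continue
        dots = 1 if data_node in cand_neighbors else 0  # direct edge
        dots += len(cand_neighbors & neighbors)          # shared third nodes
        if dots >= 2:
            results.add(candidate)
    return results
```

Under this rule, `year:1968` is only singly connected to `book:TAOCP`, so it does not qualify as metadata for it, while `author:Knuth` and `topic:algorithms` do, because each reaches the book both directly and through a third node.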