Symposium Schedule - 21 May, 1997

8:15 - 8:55  COFFEE
8:55 - 9:00  INTRODUCTION
Hugh Wilson - Texas A&M Bioinformatics Working Group
9:00 - 9:30  WWW AND BIODIVERSITY DATA - THE CANADIAN PERSPECTIVE
Larry Speers - Taxonomic Information Systems, Crop Protection, Eastern Cereal and Oilseed Research Centre, Research Branch, Agriculture and Agri-Food Canada
Canada has been actively involved with the development of the 'Clearing House Mechanism' as defined in the Convention on Biological Diversity. In addition, many suppliers of Canadian biodiversity information are using the WWW to reach potential audiences. The information being made available ranges from data about vouchered material in natural history collections (DAO Herbarium Type Specimens; Canadian National Collection of Insects Type Specimens; Beetles of Canada and Alaska), through observational information on species characteristics and distributions (Lygus Bugs of the Canadian Prairies), to databases of summary information from the literature (Poisonous Plants, Scale Insects). Information on Canada's biodiversity at the community level is also being made available. In general it has been found that the barriers to development of these kinds of usable products are not technical but are more the result of the social characteristics of the organizations and individuals who control access to the information. Some aspects of these barriers to development, and possible ways they might be overcome, will be discussed.
9:30 - 10:00  THE BIODIVERSITY INFORMATION SYSTEM OF MEXICO AND THE WWW
Jorge Soberón Mainero - Executive Secretary, National Commission for the Knowledge and Use of Biodiversity (CONABIO)
In 1993 the Mexican Government started the design and implementation of an information system capable of answering questions related to legislation, sustainable use, and conservation of the different levels of biodiversity. In this presentation I discuss its main features and the role of the Internet in its operation and future. The core of the information system is the specimen data, which are produced and maintained by professional (sometimes amateur) taxonomists and their institutions. The specimen information allows linking of ecosystem-level data on one hand with population- and species-level data on the other. The users of the information are scientists, different agencies of the government, and private citizens and institutions. The web page of CONABIO will be the main mechanism for distributing the data to the users. Currently, the page contains directories of experts and information about protected and priority areas, and it will also provide direct access to the databases. Most specimen information is not yet available because of restrictions imposed by providers. Major issues in implementing this technology are quality control of the data and ensuring widespread participation and willingness of scientists to allow their data to be placed on the web.
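To illustrate the linking role that specimen records play in such a system, the following minimal Python sketch may be useful; all class names, field names, and example records are hypothetical and do not reflect CONABIO's actual schema. The point is simply that a voucher record can anchor both species-level and ecosystem-level queries.

```python
# Minimal sketch (hypothetical schema, not CONABIO's): a voucher specimen
# record links species-level data to a place, and hence to ecosystem-level data.
from dataclasses import dataclass

@dataclass
class Specimen:
    catalog_number: str   # identifier assigned by the holding collection
    taxon_name: str       # determination made by a taxonomist
    collector: str
    locality: str         # ties the record to a place and its ecosystem
    latitude: float
    longitude: float

# Invented example records for illustration only.
specimens = [
    Specimen("MEXU-000001", "Quercus rugosa", "J. Perez", "Sierra de Guadalupe", 19.6, -99.1),
    Specimen("MEXU-000002", "Pinus hartwegii", "A. Lopez", "Nevado de Toluca", 19.1, -99.8),
]

def occurrences(taxon: str) -> list:
    """Species-level question: where is this taxon vouchered?"""
    return [s for s in specimens if s.taxon_name == taxon]

def taxa_in_box(lat_min, lat_max, lon_min, lon_max) -> set:
    """Ecosystem-level question: which taxa are vouchered within this area?"""
    return {s.taxon_name for s in specimens
            if lat_min <= s.latitude <= lat_max and lon_min <= s.longitude <= lon_max}

print(occurrences("Quercus rugosa"))
print(taxa_in_box(19.0, 20.0, -100.0, -99.0))
```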
10:00 - 10:30  BREAK
10:30 - 11:00  BUILDING THE FLORA OF NORTH AMERICA INTERNET INFORMATION SERVICE
John L. Schnase - Director, Center for Botanical Informatics, LLC, Missouri Botanical Garden [Dr. Schnase has been forced to cancel - this talk will be presented by an MBG 'team' that includes Kay L. Tomlinson (Assistant Director, Center for Botanical Informatics), James L. Zarucchi (Managing Editor, Flora of North America), Mark A. Spasser (Bioinformatics Coordinator, Center for Botanical Informatics), and J. Alfredo Sanchez (Universidad de las Americas, Puebla, Mexico)]
The Flora of North America project is becoming a fully electronic floristic research project. The result will be an ever-expanding, continually refined Internet information service containing scientifically authoritative, up-to-date information on the approximately 20,000 species of vascular plants and bryophytes of North America north of Mexico. The FNA IIS will contain a rich mix of documents, maps, illustrations, computational tools, library services, and, perhaps most important, FNA's manuscript database. With over 750 contributing scientists, FNA is one of the nation's largest collaborative research projects. The project began several years ago as a paper-based publishing effort; however, traditional publishing methods have not scaled to the task, and FNA has had difficulty meeting its production schedule. By making FNA a Web-based enterprise, we hope to significantly improve the productivity and cost-effectiveness of the project. This transformation is forcing us to confront difficult challenges that lie at the intersection of electronic publishing, large-scale scientific database activities, Internet-based project coordination, and digital libraries.
11:00 - 11:30  PROACTIVE PARTNERSHIPS ARE ESSENTIAL TO INCREASE BIODIVERSITY DATA AVAILABILITY
J. Scott Peterson - Director, National Plant Data Center, USDA, Natural Resources Conservation Service
Partnerships are essential in today's climate of tight budgets, downsizing, and increased competition for funding. The WWW is making this easier. Development of the PLANTS project by NRCS has been dependent on cooperation with other projects (e.g., the Biota of North America Program and the Flora of North America), plant science specialists, and agencies (e.g., the Agricultural Research Service, the Animal and Plant Health Inspection Service, and the Geological Survey). Cooperation has also been essential for the Interagency Taxonomic Information System (ITIS), which involves several agencies working proactively to find commonality, sharing in the development of the processes and information systems, and cooperating with the scientific community to develop the data. Cooperation on these two projects is also reaching beyond our traditional borders to other institutions, such as the International Organization for Plant Information, Species 2000, CONABIO, Agriculture Canada, and the Botanischer Garten und Botanisches Museum-Berlin. It is important that we convey our products not just among ourselves in the government and scientific communities, but also to the general public, who are our end-of-the-line clients.
11:30 - 12:30  LUNCH
12:30 - 1:00  DISTRIBUTED DATA AND THE FLORA OF TEXAS PROJECT
John Leggett - Director, Center for the Study of Digital Libraries, Department of Computer Science, Texas A&M University
The Bioinformatics Working Group at Texas A&M University has been exploring new methods to develop, manipulate, and express biodiversity data via the World Wide Web. Prototype systems, developed in collaboration with the Flora of Texas Consortium, have focused on mapped visualization of distribution/diversity data and rapid access to information and expertise drawn from a group of collaborating institutions. The prospect of expressing networked, distributed data, a central feature of the WWW environment, forces consideration of problems involving standardization, merger, and dynamic data development. Examples of both prospects and problems will be provided by examination of systems now under development for the Flora of Texas Project. The talk will briefly cover two prospects for the future (end-user collections management, and the use of interchange formats instead of global schemas) and one conspicuous problem: the lack of graduate-level academic programs in bioinformatics.
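To make the interchange-format idea concrete, here is a minimal Python sketch; the institution names, field names, and records are invented for illustration and are not the Consortium's actual formats. Each collaborating institution keeps its own local schema, and only a small shared set of field names is agreed upon for exchange, so records can be merged without imposing a single global schema on the source databases.

```python
# Minimal sketch, assuming hypothetical exports and field names: merging
# specimen records from collaborating institutions through a small agreed-upon
# interchange format rather than a single global schema.
import csv
import io

# Hypothetical exports from two herbaria, each with its own local field names.
HERBARIUM_A = """catalog,sciname,county
TAMU-1001,Quercus stellata,Brazos
TAMU-1002,Ilex vomitoria,Walker
"""

HERBARIUM_B = """accession_no,taxon,tx_county
TEX-5501,Quercus stellata,Travis
"""

# The interchange format: the few fields the partners agree to share, plus a
# per-institution mapping from local field names onto those shared names.
INTERCHANGE_FIELDS = ["institution", "catalog_number", "taxon_name", "county"]
FIELD_MAPS = {
    "A": {"catalog": "catalog_number", "sciname": "taxon_name", "county": "county"},
    "B": {"accession_no": "catalog_number", "taxon": "taxon_name", "tx_county": "county"},
}

def to_interchange(institution, csv_text):
    """Translate one institution's export into interchange-format records."""
    mapping = FIELD_MAPS[institution]
    records = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        rec = {"institution": institution}
        for local_name, shared_name in mapping.items():
            rec[shared_name] = row[local_name]
        records.append(rec)
    return records

# Merge without ever changing the source databases' own schemas.
merged = to_interchange("A", HERBARIUM_A) + to_interchange("B", HERBARIUM_B)
for rec in merged:
    print([rec[f] for f in INTERCHANGE_FIELDS])
```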
1:00 - 1:30  CONCERNS REGARDING INTEGRITY OF BIOLOGICAL DATA VIA THE INTERNET
John Kartesz - Director, Biota of North America Program, North Carolina Botanical Garden, University of North Carolina
The expanding interest in the Internet as a public resource for sharing information has placed heavy demands on the biological community for high standards regarding data development, maintenance, and integrity. These demands are exceptionally challenging today due to the large database systems (those of 100 megabytes or more, housing thousands of individually linked fields) that are currently functional and being shared by multiple data centers. Regrettably, even the most carefully constructed of these databases, which incorporate exceedingly precise data bridges linking various data sets, are proving to be somewhat imperfect, occasionally prone to malfunction, and unable to accommodate all the necessary linking procedures. Such failures ultimately lead to data fragmentation problems requiring specific datum modifications to be completed by hand, which commonly result in errors. Once data are released from the centers of development, they are obtained by data maintenance centers, where data integrity problems are exacerbated further by data manipulation procedures employed by managers and programmers, who often have specific objectives of making the data comply with local, regional, or even national goals. The unfortunate consequence of this manipulation is that the data themselves are often modified or "embellished", which leads to even more errors. Once the user community is able to access these data via the Internet, unless specific guidelines and definitions are provided, the data are often corrupted further by misinterpretations and misunderstandings. A possible solution is for the various data centers to establish tighter controls governing data dispersal processes, to work collaboratively toward building better data bridges, and to incorporate more stringent data standards and restrictions, which must include, minimally, more comprehensive definitions, specific dates of data releases, and guidelines regarding the data themselves.
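As one concrete example of the kind of control such a solution implies, the following minimal Python sketch checks that the links in a "data bridge", here simply a shared taxon code, actually resolve before data are released; all data sets, codes, and field names are hypothetical and are not drawn from any of the projects discussed.

```python
# Minimal sketch with hypothetical data sets and field names: a "data bridge"
# here is a shared key, and the check reports records whose links are broken
# before the data are redistributed to other centers or to the public.

# Hypothetical taxonomic checklist keyed by a taxon code.
checklist = {
    "POAC-0001": "Bouteloua curtipendula",
    "POAC-0002": "Schizachyrium scoparium",
}

# Hypothetical distribution records from another data set that link back to the
# checklist via the same taxon code.
distribution = [
    {"taxon_code": "POAC-0001", "state": "TX"},
    {"taxon_code": "POAC-0002", "state": "OK"},
    {"taxon_code": "POAC-0099", "state": "NM"},  # broken link: no such code
]

def broken_links(records, key_field, reference):
    """Return records whose key does not resolve in the reference data set."""
    return [r for r in records if r[key_field] not in reference]

for rec in broken_links(distribution, "taxon_code", checklist):
    print("unresolved link before release:", rec)
```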
1:30 - 2:00  BREAK
2:00 - 4:00  PANEL DISCUSSION

Last update: 19 May 1997.