948 results for Query-by-example
Abstract:
The purpose of this research was to compare the delivery methods practiced by higher education faculty teaching distance courses with recommended or emerging standard instructional delivery methods for distance education. Previous research shows that traditional instructional strategies have been used in distance education and that faculty have received no training in distance teaching. Secondary data, however, appear to suggest emerging practices that could be pooled toward the development of standards. This is a qualitative study based on the constant comparative analysis approach of grounded theory. Participants (N = 5) were full-time faculty teaching distance education courses. The observation method used was unobtrusive content analysis of videotaped instruction. Triangulation of data was accomplished through one-on-one in-depth interviews and the literature review. Because non-media content was also analyzed, the researcher designed a special time-sampling technique, influenced by content-analysis theories for media-related data, to sample the portions of the videotaped instruction that were observed and counted. A standardized interview guide was used to collect data from the in-depth interviews. Coding was based on categories drawn from the literature review and from Cranton and Weston's (1989) typology of instructional strategies. The data were observed, counted, tabulated, analyzed, and interpreted solely by the researcher; systematic and rigorous data collection and analysis, however, led to credible data. The findings of this study supported the proposition that there are no standard instructional practices for distance teaching. Further, the findings revealed that of the emerging practices suggested by proponents and by faculty who teach distance education courses, few were practiced even minimally. A noted example was the use of lecture and questioning.
Questioning, as a teaching tool, was used a great deal with students at the originating site but not with distance students. Lectures were given, but were mostly conducted in traditional fashion: long in duration and with no interactive component. It can be concluded from the findings that while there are no standard practices for instructional delivery in distance education, there appears to be sufficient information from secondary and empirical data to initiate some standard instructional practices. Therefore, grounded in these research data is the theory that the way to arrive at instructional delivery standards for televised distance education is to pool the tacitly agreed-upon emerging practices of proponents and practicing instructors. Implicit in this theory is a need for experimental research so that these emerging practices can be tested, tried, and proven, ultimately resulting in formal standards for instructional delivery in television education.
Abstract:
In this study, I determined the identity, taxonomic placement, and distribution of digenetic trematodes parasitizing the snails Pomacea paludosa and Planorbella duryi at Pa-hay-okee, Everglades National Park. I also characterized temporal and geographic variation in the probability of parasite infection for these snails based on two years of sampling. Although studies indicate that digenean parasites may have important effects both on individual species and on the structure of communities, there have been no studies of digenean parasitism of snails within the Everglades ecosystem. For example, the endangered Everglade Snail Kite, a specialist that feeds almost exclusively on Pomacea paludosa and is known to be a definitive host of digenean parasites, may suffer direct and indirect effects from consumption of parasitized apple snails. Therefore, information on the diversity and abundance of parasites harbored in snail populations in the Everglades should be of considerable interest for the management and conservation of wildlife. Juvenile digeneans (cercariae) representing 20 species were isolated from these two snails, quadrupling the number of species known. Species were characterized based on morphological, morphometric, and sequence data (18S rDNA, COI, and ITS). Species richness of shed cercariae was greater in P. duryi than in P. paludosa, with 13 and 7 species, respectively. These species represented 14 families. P. paludosa and P. duryi had no digenean species in common. The probability of digenean infection was higher for P. duryi than for P. paludosa, and adults showed a greater risk of infection than juveniles for both snails. Planorbella duryi showed variation in probability of infection between sampling sites and hydrological seasons. The number of unique combinations of multi-species infections was greatest among P. duryi individuals, while the overall percentage of multi-species infections was greatest in P. paludosa.
Analyses of six frequently-observed multiple infections from P. duryi suggest the presence of negative interactions, positive interactions, and neutral associations between larval digeneans. These results should contribute to an understanding of the factors controlling the abundance and distribution of key species in the Everglades ecosystem and may in particular help in the management and recovery planning for the Everglade Snail Kite.
Abstract:
Graph-structured databases are widely prevalent, and the problem of effective search and retrieval from such graphs has received much attention recently. For example, the Web can be naturally viewed as a graph. Likewise, a relational database can be viewed as a graph where tuples are modeled as vertices connected via foreign-key relationships. Keyword search querying has emerged as one of the most effective paradigms for information discovery, especially over HTML documents in the World Wide Web. One of the key advantages of keyword search querying is its simplicity: users do not have to learn a complex query language and can issue queries without any prior knowledge about the structure of the underlying data. The purpose of this dissertation was to develop techniques for user-friendly, high-quality, and efficient searching of graph-structured databases. Several ranked search methods on data graphs have been studied in recent years. Given a top-k keyword search query on a graph and some ranking criteria, a keyword proximity search finds the top-k answers, where each answer is a substructure of the graph containing all query keywords and illustrating the relationships between the keywords present in the graph. We applied keyword proximity search to the Web and to the page graph of web documents to find top-k answers that satisfy the user's information need and increase user satisfaction. Another effective ranking mechanism applied to data graphs is authority-flow based ranking. Given a top-k keyword search query on a graph, an authority-flow based search finds the top-k answers, where each answer is a node in the graph ranked according to its relevance and importance to the query. We developed techniques that improve authority-flow based search on data graphs by creating a framework to explain and reformulate such searches, taking into consideration user preferences and feedback.
We also applied the proposed graph search techniques to information discovery over biological databases. Our algorithms were experimentally evaluated for performance and quality. The quality of our method was compared to that of current approaches using user surveys.
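As an illustrative sketch of the keyword proximity idea (a toy version, not the dissertation's actual algorithm or ranking function), the following Python code ranks candidate answer roots in an unweighted graph by the total distance from each root to the closest node containing each query keyword:

```python
from collections import deque

def bfs_distances(graph, start):
    """Distance from `start` to every reachable node (unweighted BFS)."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def proximity_search(graph, node_keywords, query, k=3):
    """Rank candidate root nodes by the summed distance to the nearest
    node containing each query keyword; smaller total = tighter answer."""
    hits = {kw: [n for n, kws in node_keywords.items() if kw in kws]
            for kw in query}
    if not all(hits.values()):
        return []  # some keyword occurs nowhere in the graph
    dist = {n: bfs_distances(graph, n) for n in graph}
    answers = []
    for root in graph:
        cost, reachable = 0, True
        for kw in query:
            ds = [dist[root][h] for h in hits[kw] if h in dist[root]]
            if not ds:
                reachable = False
                break
            cost += min(ds)
        if reachable:
            answers.append((cost, root))
    answers.sort()
    return answers[:k]
```

A real system would use weighted edges, backward-expanding search, and an IR-style scoring function rather than exhaustive BFS from every node; this sketch only shows the connecting-substructure intuition.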
Abstract:
With the advent of peer-to-peer networks and, more importantly, sensor networks, the desire to extract useful information from continuous and unbounded streams of data has become more prominent. For example, in tele-health applications, sensor-based data streaming systems are used to continuously and accurately monitor Alzheimer's patients and their surrounding environment. Typically, the requirements of such applications necessitate the cleaning and filtering of continuous, corrupted, and incomplete data streams gathered wirelessly under dynamically varying conditions. Yet, existing data stream cleaning and filtering schemes are incapable of capturing the dynamics of the environment while simultaneously suppressing the losses and corruption introduced by uncertain environmental, hardware, and network conditions. Consequently, existing data cleaning and filtering paradigms are being challenged. This dissertation develops novel schemes for cleaning data streams received from a wireless sensor network operating under non-linear and dynamically varying conditions. The study establishes a paradigm for validating spatio-temporal associations among data sources to enhance data cleaning. To simplify the complexity of the validation process, the developed solution maps the requirements of the application onto a geometric space and identifies the potential sensor nodes of interest. Additionally, this dissertation models a wireless sensor network data reduction system by ascertaining that segregating the data adaptation and prediction processes will augment the data reduction rates. The schemes presented in this study are evaluated using simulation and information theory concepts. The results demonstrate that dynamic conditions of the environment are better managed when validation is used for data cleaning. They also show that when a fast-convergent adaptation process is deployed, data reduction rates are significantly improved.
Targeted applications of the developed methodology include machine health monitoring, tele-health, environment and habitat monitoring, intermodal transportation and homeland security.
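A minimal sketch of the spatial-validation idea described above, assuming simple 2-D sensor coordinates and a neighborhood-median plausibility check (the dissertation's actual scheme is more elaborate; names and thresholds here are invented for illustration):

```python
import math
from statistics import median

def neighbors_of_interest(sensors, target, radius):
    """Map the application's requirements onto a geometric space:
    keep only sensors within `radius` of the target node."""
    tx, ty = sensors[target]["pos"]
    return [sid for sid, s in sensors.items()
            if sid != target
            and math.hypot(s["pos"][0] - tx, s["pos"][1] - ty) <= radius]

def validate_reading(sensors, target, radius=10.0, tolerance=5.0):
    """Cross-validate the target's reading against the neighborhood
    median; a value far from its spatially correlated peers is suspect
    and can be dropped or repaired during stream cleaning."""
    peers = neighbors_of_interest(sensors, target, radius)
    if not peers:
        return True  # no spatial basis for rejecting the reading
    ref = median(sensors[p]["value"] for p in peers)
    return abs(sensors[target]["value"] - ref) <= tolerance
```

Restricting the validation to a geometric neighborhood keeps the cross-check cheap, which mirrors the study's motivation for identifying only the potential sensor nodes of interest.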
Abstract:
The purpose of this phenomenological study was to describe how Colombian adult English language learners (ELLs) select and use language learning strategies (LLS). The study used Oxford's (1990a) taxonomy of LLS as its theoretical framework. Semi-structured interviews and a focus group interview were conducted, transcribed, and analyzed for 12 Colombian adult ELLs. A communicative activity known as the strip story (Gibson, 1975) was used to elicit participants' use of LLS; this activity preceded the focus group session. Additionally, participants' reflective journals were collected and analyzed. Data were analyzed using inductive, deductive, and comparative analyses. Four themes emerged from the inductive analysis of the data: (a) learning conditions, (b) problem-solving resources, (c) information processing, and (d) target language practice. Oxford's classification of LLS was used as a guide in deductively analyzing data concerning the participants' experiences. The deductive analysis revealed that participants do not use certain strategies included in Oxford's taxonomy at the third level. For example, semantic mapping and physical response or sensation were not reported by participants. The findings from the inductive and deductive analyses were then compared to look for patterns and answers to the research questions. The comparative analysis revealed that participants used additional LLS that are not included in Oxford's taxonomy. Some examples of these strategies are using sound transcription in the native language and seeking help from children. The study was conducted at the MDC InterAmerican campus in South Florida, one of the largest Hispanic-influenced communities in the U.S. Based on the findings from this study, the researcher proposed a framework for studying LLS that includes both external (i.e., learning context, community) and internal (i.e., culture, prior education) factors that influence the selection and use of LLS.
The findings from this study imply that, given the importance of both external and internal factors in learners' use of LLS, these factors should be considered for inclusion in any study of language learning strategy use by adult learners. Implications for teaching and learning as well as recommendations for further research are provided.
Abstract:
Modern geographical databases, which are at the core of geographic information systems (GIS), store a rich set of aspatial attributes in addition to geographic data. Typically, aspatial information comes in textual and numeric format. Retrieving information constrained on both spatial and aspatial data from geodatabases gives GIS users the ability to perform more interesting spatial analyses, and lets applications support composite location-aware searches; for example, in a real estate database: "Find the homes for sale nearest to my current location that have a backyard and whose prices are between $50,000 and $80,000". Efficient processing of such queries requires combined indexing strategies over multiple types of data. Existing spatial query engines commonly apply a two-filter approach (a spatial filter followed by a nonspatial filter, or vice versa), which can incur large performance overheads. Meanwhile, the amount of geolocation data in databases has grown rapidly, due in part to advances in geolocation technologies (e.g., GPS-enabled smartphones) that allow users to associate location data with objects or events. This poses potential data-ingestion challenges for practical GIS databases handling large data volumes. In this dissertation, we first show how indexing spatial data with R-trees (a typical data pre-processing task) can be scaled in MapReduce, a widely adopted parallel programming model for data-intensive problems. The evaluation of our algorithms on a Hadoop cluster showed close to linear scalability in building R-tree indexes. Subsequently, we develop efficient algorithms for processing spatial queries with aspatial conditions. Novel techniques for simultaneously indexing spatial data with textual and numeric data are developed to that end.
Experimental evaluations with real-world, large spatial datasets measured query response times within the sub-second range for most cases, and up to a few seconds for a small number of cases, which is reasonable for interactive applications. Overall, the previous results show that the MapReduce parallel model is suitable for indexing tasks in spatial databases, and the adequate combination of spatial and aspatial attribute indexes can attain acceptable response times for interactive spatial queries with constraints on aspatial data.
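The composite real-estate query above can be sketched as a single-pass combined filter. This toy Python version (brute force, with no actual R-tree or text index; field names are invented) only shows how the spatial, textual, and numeric predicates can be evaluated together per record instead of in two separate filter stages:

```python
import math

def combined_query(homes, origin, text_term, price_range, k=3):
    """Composite location-aware search: numeric (price range), textual
    (term in description), and spatial (distance to origin) predicates
    are checked in one pass, then the k nearest matches are returned."""
    lo, hi = price_range
    matches = []
    for h in homes:
        if lo <= h["price"] <= hi and text_term in h["description"]:
            d = math.hypot(h["x"] - origin[0], h["y"] - origin[1])
            matches.append((d, h["id"]))
    matches.sort()  # nearest first
    return matches[:k]
```

An indexed implementation would instead traverse a combined spatial/textual/numeric index so that non-matching records are never touched; the point of the combined approach is to avoid materializing a large intermediate result from one filter before applying the other.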
Abstract:
In their discussion - Database System for Alumni Tracking - by Steven Moll, Associate Professor, and William O'Brien, Assistant Professor, School of Hospitality Management at Florida International University, Professors Moll and O'Brien initially state: "The authors describe a unique database program which was created to solve problems associated with tracking hospitality majors subsequent to graduation." "…and please, whatever you do, keep in touch with your school; join an alumni organization. It is a great way to engage the resources of your school to help further your career," says Professor Claudia Castillo in addressing a group of students attending her Life after College seminar on 9/18/2009. This is a very good point, and it is obviously germane to the article at hand. "One of the greatest strengths of a hospitality management school, a strength that grows with each passing year, is its body of alumni," say the authors. "Whether in recruiting new students or placing graduates, whether in fund raising or finding scholarship recipients, whatever the task, the network of loyal alumni stands ready to help." The caveat is that these resources are only available if students and school, faculty and alumni, can keep track of each other, say Professors Moll and O'Brien. The authors want you to know that this practice is now considered essential to success, especially in the hospitality industry, where the fluid nature of the business makes networking de rigueur for accomplishment. "When the world was a smaller, slower place, it was fairly easy for graduates to keep track of each other; there weren't that many graduates and they didn't move that often," say the authors. "Now the hospitality graduate enters an international job market and may move five times in the first four years of employment," they continue. In the contemporary atmosphere, linking human resources from institution to marketplace is relatively easy to do.
"How can an association keep track of its graduates? There are many techniques, but all of them depend upon adequate recordkeeping," Moll and O'Brien answer their own query. "A few years ago that would have meant a group of secretaries; today it means a database system," they say. Moll and O'Brien discuss the essentials of compiling and programming such a comprehensive database: the body of information to include, guidelines on the problems encountered, and how to avoid the pitfalls. They use the Florida International University hospitality database as a template for their example.
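The recordkeeping Moll and O'Brien describe maps naturally onto a relational schema. As a purely hypothetical sketch (table and column names are invented here, not those of the FIU system), an alumni-tracking database might look like:

```python
import sqlite3

# Minimal illustrative schema: one table for alumni contact records,
# one for their (possibly many) employment entries over time.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE alumni (
    id        INTEGER PRIMARY KEY,
    name      TEXT NOT NULL,
    grad_year INTEGER,
    email     TEXT
);
CREATE TABLE employment (
    alum_id    INTEGER REFERENCES alumni(id),
    employer   TEXT,
    start_year INTEGER
);
""")
conn.execute("INSERT INTO alumni VALUES (1, 'A. Graduate', 1992, 'a@example.com')")
conn.execute("INSERT INTO employment VALUES (1, 'Grand Hotel', 1993)")

# Example lookup: where did the class of 1992 end up?
row = conn.execute("""
    SELECT a.name, e.employer FROM alumni a
    JOIN employment e ON e.alum_id = a.id
    WHERE a.grad_year = 1992
""").fetchone()
```

Keeping employment history in a separate table is what lets the system track graduates who, as the authors note, may move five times in their first four years.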
Abstract:
Cloud computing can be defined as a distributed computing model through which resources (hardware, storage, development platforms, and communication) are shared as paid services, accessible with minimal management effort and interaction. A great benefit of this model is the ability to use multiple providers (i.e., a multi-cloud architecture) to compose a set of services and obtain an optimal configuration for performance and cost. However, multi-cloud use is hindered by the problem of cloud lock-in: the dependency between an application and a cloud platform. It is commonly addressed by three strategies: (i) use of an intermediate layer that stands between consumers of cloud services and the provider, (ii) use of standardized interfaces to access the cloud, or (iii) use of models with open specifications. This work outlines an approach to evaluating these strategies. The evaluation found that, despite the advances made by these strategies, none of them actually solves the problem of cloud lock-in. In this light, this work proposes the use of Semantic Web technologies to avoid cloud lock-in, where RDF models are used to specify the features of a cloud and are managed through SPARQL queries. To that end, this work: (i) presents an evaluation model that quantifies the problem of cloud lock-in, (ii) evaluates cloud lock-in in three multi-cloud solutions and three cloud platforms, (iii) proposes using RDF and SPARQL for the management of cloud resources, (iv) presents the Cloud Query Manager (CQM), a SPARQL server that implements the proposal, and (v) compares three multi-cloud solutions to CQM with respect to response time and effectiveness in resolving cloud lock-in.
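A toy illustration of the RDF/SPARQL idea (a hand-rolled triple-pattern matcher in plain Python, not a real SPARQL engine, with invented `cloud:`/`cqm:` names; CQM itself is a full SPARQL server):

```python
def match(triples, pattern):
    """Match one (subject, predicate, object) pattern against a set of
    triples; strings beginning with '?' are variables. Returns a list
    of variable bindings, one dict per matching triple."""
    results = []
    for s, p, o in triples:
        binding = {}
        for slot, val in zip(pattern, (s, p, o)):
            if slot.startswith("?"):
                binding[slot] = val
            elif slot != val:
                binding = None
                break
        if binding is not None:
            results.append(binding)
    return results

# Cloud features described as RDF-style triples (names are invented).
triples = {
    ("cloud:ProviderA", "cqm:offersService", "cqm:ObjectStorage"),
    ("cloud:ProviderA", "cqm:region", "us-east"),
    ("cloud:ProviderB", "cqm:offersService", "cqm:ObjectStorage"),
    ("cloud:ProviderB", "cqm:offersService", "cqm:VirtualMachine"),
}

# Which providers offer object storage? Roughly the SPARQL query
#   SELECT ?p WHERE { ?p cqm:offersService cqm:ObjectStorage }
providers = sorted(
    b["?p"] for b in match(triples, ("?p", "cqm:offersService", "cqm:ObjectStorage"))
)
```

Because every provider's features are described in the same open model, the same query works across clouds, which is the mechanism the proposal relies on to sidestep lock-in.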
Abstract:
From the 12th until the 17th of July 2016, the research vessel Maria S. Merian entered the Nordvestfjord of Scoresby Sound (East Greenland) as part of research cruise MSM56, "Ecological chemistry in Arctic fjords". A large variety of chemical and biological parameters of fjord water and meltwater were measured during this cruise to characterize biogeochemical fluxes in Arctic fjords. The photo documentation described here was a side project. It was started when we were close to the Daugaard-Jensen glacier at the end of the Nordvestfjord and realized that not many people have seen this area before and that photos available to scientists are probably rare. These pictures shall help to document climate and landscape changes in a remote area of East Greenland. Pictures were taken with a Panasonic Lumix G6 equipped with either a 14-42 or a 45-150 lens (zoom factor available in the jpg metadata). Polarizer filters were used on both lenses. The time between taking the pictures and writing down the coordinates was at most one minute but usually shorter. The uncertainty in position is therefore small, as we were steaming slowly most of the time the pictures were taken (i.e., below 5 knots); I assume the uncertainty is in most cases below a 200 m radius of the noted position. At the beginning, I did not check the direction the camera was pointed with a compass. Hence, the noted direction is an approximation based on the navigation map and the positioning of the ship. The uncertainty was probably around +/- 40°, but initially (pictures 1-17) perhaps even higher, as this documentation was a spontaneous idea and it took some time to get the orientation right. It should be easy, however, to find the location of the mountains and glaciers when at the respective positions, because the mountains have quite characteristic shapes. In a later stage of this documentation, I took pictures from the bridge and used the gyros to approximate the direction the camera was pointed.
Here the uncertainty was much lower (i.e., +/- 20° or better). Directions approximated with the help of the gyros have degree values in the overview table. The ship data provided in the MSM56 cruise report will contain all kinds of sensor data from the Maria S. Merian sensor setup. These data can also be used to further constrain the positions the pictures were taken from, because the exact time a photo was shot is noted in the metadata of the .jpg file. The shipboard clock was set to UTC and was 57 minutes and 45 seconds behind the time on the camera; for example, 12:57:45 on the camera was 12:00:00 UTC on the ship. All pictures provided here can be used for scientific purposes. In case of usage in presentations etc., please acknowledge RV Maria S. Merian (MSM56) and Lennart T. Bach as author. Please inform me and ask for reprint permission if you want to use the pictures in scientific publications. I would like to thank all participants and the crew of Maria S. Merian cruise 56 (MSM56, Ecological chemistry in Arctic fjords).
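The clock correction described above is a simple subtraction; a small Python helper (illustrative only) converts a camera timestamp to shipboard UTC:

```python
from datetime import datetime, timedelta

# The shipboard clock ran on UTC and was 57 min 45 s behind the camera,
# so: UTC = camera time - 00:57:45.
CAMERA_OFFSET = timedelta(minutes=57, seconds=45)

def camera_to_utc(camera_time):
    """Convert a timestamp from the camera's .jpg metadata to UTC."""
    return camera_time - CAMERA_OFFSET
```

For example, the documented case 12:57:45 (camera) maps to 12:00:00 UTC, which can then be matched against the ship's sensor logs to pin down where each photo was taken.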
Abstract:
We quantified pigment biomarkers by high performance liquid chromatography (HPLC) to obtain a broad taxonomic classification of the microphytobenthos (MPB) (i.e., identification of dominant taxa). Three replicate sediment cores were collected at 0, 50, and 100 m along transects 5-9 in Heron Reef lagoon (n=15) (Fig. 1). Transects 1-4 could not be processed because the means to have the samples analysed by HPLC were not available at the time of field data collection. Cores were stored frozen, and scrapes were taken from the top of each core and placed in cryovials immersed in dry ice. Samples were sent to the laboratory (CSIRO Marine and Atmospheric Research, Hobart, Australia), where pigments were extracted with 100% acetone for fifteen hours at 4°C after vortex mixing (30 seconds) and sonication (15 minutes). Samples were then centrifuged and filtered prior to the analysis of pigment composition with a Waters Alliance HPLC system equipped with a photo-diode array detector. Pigments were separated using a Zorbax Eclipse XDB-C8 stainless steel 150 mm x 4.6 mm ID column with 3.5 µm particle size (Agilent Technologies) and a binary gradient system with an elevated column temperature, following a modified version of the Van Heukelem and Thomas (2001) method. The separated pigments were detected at 436 nm and identified against standard spectra using Waters Empower software. Standards for HPLC system calibration were obtained from Sigma (USA) and DHI (Denmark).
Abstract:
The goal of this research was to determine the composition of boron deposits produced by pyrolysis of boron tribromide, and to use the results to (a) determine the experimental conditions (reaction temperature, etc.) necessary to produce alpha-rhombohedral boron and (b) guide the development and refinement of the pyrolysis experiments so that large, high-purity crystals of alpha-rhombohedral boron can be produced consistently. Developing a method for producing large, high-purity alpha-rhombohedral boron crystals is of interest because such crystals could potentially be used to realize an alpha-rhombohedral boron based neutron detector (a solid-state detector) as an alternative to existing neutron detector technologies. The supply of neutron detectors in the United States has been hampered for a number of years by the current shortage of helium-3 (a gas used in many existing neutron detector technologies); the development of alternative neutron detector technology such as an alpha-rhombohedral boron based detector would help provide a more sustainable supply of neutron detectors in this country. In addition, the concept of an alpha-rhombohedral boron based neutron detector is attractive because it offers the possibility of a design that is smaller, longer-lived, less power-consuming, and potentially more sensitive than existing neutron detectors. The main difficulty associated with creating an alpha-rhombohedral boron based neutron detector is that producing large, high-purity crystals of alpha-rhombohedral boron is extremely challenging. Past researchers have successfully made alpha-rhombohedral boron via a number of methods, but no one has developed a method for consistently producing large, high-purity crystals.
Alpha-rhombohedral boron is difficult to make because it is only stable at temperatures below around 1100-1200 °C, its formation is very sensitive to impurities, and the conditions necessary for its formation are not fully understood or agreed upon in the literature. In this research, the method of pyrolysis of boron tribromide (hydrogen reduction of boron tribromide) was used to deposit boron on a tantalum filament. The goal was to refine this method, or potentially use it in combination with a second method (amorphous boron crystallization), to the point where it is possible to grow large, high purity alpha-rhombohedral boron crystals with consistency. A pyrolysis apparatus was designed and built, and a number of trials were run to determine the conditions (reaction temperature, etc.) necessary for alpha-rhombohedral boron production. This work was focused on the x-ray diffraction analysis of the boron deposits; x-ray diffraction was performed on a number of samples to determine the types of boron (and other compounds) formed in each trial and to guide the choices of test conditions for subsequent trials. It was found that at low reaction temperatures (in the range of around 830-950 °C), amorphous boron was the primary form of boron produced. Reaction temperatures in the range of around 950-1000 °C yielded various combinations of crystalline boron and amorphous boron. In the first trial performed at a temperature of 950 °C, a mix of amorphous boron and alpha-rhombohedral boron was formed. Using a scanning electron microscope, it was possible to see small alpha-rhombohedral boron crystals (on the order of ~1 micron in size) embedded in the surface of the deposit. 
In subsequent trials carried out at reaction temperatures in the range of 950-1000 °C, it was found that various combinations of alpha-rhombohedral boron, beta-rhombohedral boron, and amorphous boron were produced; the results tended to be unpredictable (alpha-rhombohedral boron was not produced in every trial), and the factors leading to success or failure were difficult to pinpoint. These results illustrate how sensitive a process producing alpha-rhombohedral boron can be, and indicate that further improvements to the test apparatus and test conditions (for example, higher purity and cleanliness) may be necessary to optimize the boron deposition. Although alpha-rhombohedral boron crystals of large size were not achieved, this research was successful in (a) developing a pyrolysis apparatus and test procedure that can serve as a platform for future testing, (b) determining reaction temperatures at which alpha-rhombohedral boron can form, and (c) developing a consistent process for analyzing the boron deposits and determining their composition. Further experimentation is necessary to achieve a pyrolysis apparatus and test procedure that can yield large alpha-rhombohedral boron crystals with consistency.