25 results for temporal-logic model
Abstract:
We developed a conceptual ecological model (CEM) for invasive species to help understand the role invasive exotics play in ecosystem ecology and their impacts on restoration activities. Our model, which can be applied to any invasive species, grew from the eco-regional conceptual models developed for Everglades restoration. These models identify ecological drivers, stressors, effects, and attributes; we integrated the unique aspects of exotic species invasions and their effects into this conceptual hierarchy. We used the model to help identify important aspects of invasion in the development of an invasive exotic plant ecological indicator, which is described in a companion paper in this special issue. A key aspect of the CEM is that it is a general ecological model that can be tailored to specific cases and species, as the details of any invasion are unique to that invasive species. Our model encompasses the temporal and spatial changes that characterize invasion, identifying the general conditions that allow a species to become invasive in a de novo environment; it then enumerates the possible effects exotic species may have, collectively and individually, at varying scales and for different ecosystem properties once a species becomes invasive. The model provides suites of characteristics and processes, as well as hypothesized causal relationships, to consider when thinking about the effects or potential effects of an invasive exotic and how restoration efforts will affect these characteristics and processes. To illustrate how to use the model as a blueprint for applying a similar approach to other invasive species and ecosystems, we give two examples of using this conceptual model to evaluate the status of two south Florida invasive exotic plant species (melaleuca and Old World climbing fern) and consider potential impacts of these invasive species on restoration.
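The driver-stressor-effect-attribute hierarchy described above can be sketched as a small data structure; all class names, field names, and example values below are hypothetical illustrations, not taken from the CEM itself:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the CEM hierarchy (drivers -> stressors ->
# effects/attributes); names and example values are illustrative only.
@dataclass
class Stressor:
    name: str
    effects: list = field(default_factory=list)     # ecological effects
    attributes: list = field(default_factory=list)  # attributes affected

@dataclass
class Driver:
    name: str
    stressors: list = field(default_factory=list)

# Example: one driver with one stressor, tailored to a specific invader.
melaleuca = Driver("melaleuca invasion", stressors=[
    Stressor("canopy conversion",
             effects=["altered fire regime"],
             attributes=["native plant cover"]),
])
```

Tailoring the general model to a particular species then amounts to populating this hierarchy with that invasion's specific drivers, stressors, and hypothesized causal links.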
Abstract:
Geochemical mixing models were used to decipher the dominant source of freshwater (rainfall, canal discharge, or groundwater discharge) to Biscayne Bay, an estuary in south Florida. Discrete samples of precipitation, canal water, groundwater, and bay surface water were collected monthly for 2 years and analyzed for salinity, stable isotopes of oxygen and hydrogen, and Sr2+/Ca2+ concentrations. These geochemical tracers were used in three separate mixing models and then combined to trace the magnitude and timing of the freshwater inputs to the estuary. Fresh groundwater had an isotopic signature (δ18O = −2.66‰, δD = −7.60‰) similar to rainfall (δ18O = −2.86‰, δD = −4.78‰). Canal water had a heavy isotopic signature (δ18O = −0.46‰, δD = −2.48‰) due to evaporation. This made it possible to use stable isotopes of oxygen and hydrogen to separate canal water from precipitation and groundwater as a source of freshwater into the bay. A second model using Sr2+/Ca2+ ratios was developed to discern fresh groundwater inputs from precipitation inputs. Groundwater had a Sr2+/Ca2+ ratio of 0.07, while precipitation had a dissimilar ratio of 0.89. When combined, these models showed a freshwater input ratio of canal/precipitation/groundwater of 37%:53%:10% in the wet season and 40%:55%:5% in the dry season, with an error of ±25%. For a bay-wide water budget that includes saltwater and freshwater mixing, fresh groundwater accounts for 1–2% of the total fresh and saline water input.
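Endmember mixing of this kind reduces to a small linear system: one mass-balance row (fractions sum to 1) plus one row per tracer. The sketch below uses the δ18O endmember values reported above, but the canal Sr2+/Ca2+ ratio (0.30) and the "observed" bay-water values in `b` are hypothetical, chosen only to make the arithmetic concrete:

```python
import numpy as np

# Three-endmember mixing: solve for the fractions of rain, canal, and
# groundwater that sum to 1 and reproduce two observed tracers.
# d18O endmember values come from the study; the canal Sr/Ca ratio and
# the observed mixture vector b are hypothetical illustrations.
A = np.array([
    [1.00,  1.00,  1.00],   # mass balance: f_rain + f_canal + f_gw = 1
    [-2.86, -0.46, -2.66],  # d18O (per mil): rain, canal, groundwater
    [0.89,  0.30,  0.07],   # Sr/Ca ratio (canal value assumed)
])
b = np.array([1.0, -1.952, 0.5897])  # hypothetical bay-water observation
f_rain, f_canal, f_gw = np.linalg.solve(A, b)
```

With these illustrative inputs the solver recovers rain/canal/groundwater fractions of roughly 0.53/0.37/0.10, mirroring the wet-season ratio reported in the abstract.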
Abstract:
Ensemble stream modeling and data cleaning are sensor information processing systems with different training and testing methods by which their goals are cross-validated. This research examines a mechanism that seeks to extract novel patterns by generating ensembles from data. The main goal of label-less stream processing is to process the sensed events, eliminate uncorrelated noise, and choose the most likely model without overfitting, thus obtaining higher model confidence. Higher-quality streams can be realized by combining many short streams into an ensemble that has the desired quality. The framework for the investigation is an existing data mining tool. First, to accommodate feature extraction for events such as a bush or natural forest fire, we take the burnt area (BA*), a sensed ground truth obtained from logs, as our target variable. Even though this is an obvious model choice, the results are disappointing, for two reasons: first, the histogram of fire activity is highly skewed; second, the measured sensor parameters are highly correlated. Since non-descriptive features do not yield good results, we resort to temporal features. By doing so we carefully eliminate averaging effects; the resulting histogram is more satisfactory, and conceptual knowledge is learned from the sensor streams. Second is the process of feature induction by cross-validating attributes against single- or multi-target variables to minimize training error. We use the F-measure score, which combines precision and recall, to determine the false alarm rate of fire events. The multi-target data-cleaning trees use the information purity of the target leaf nodes to learn higher-order features. A sensitive variance measure, such as an F-test, is performed at each node's split to select the best attribute. The ensemble stream model approach proved to improve when complicated features were paired with a simpler tree classifier.
The ensemble framework for data cleaning, and the enhancements to quantify sensor quality of fitness (30% spatial, 10% temporal, and 90% mobility reduction), led to the formation of quality streams for sensor-enabled applications. This further motivates the novelty of stream quality labeling and its importance in handling the vast amounts of real-time mobile streams generated today.
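As a minimal sketch, the F-measure used above to gauge false-alarm behavior is the harmonic mean of precision and recall, computed from confusion-matrix counts (the function and variable names here are generic, not from the study's tool):

```python
# F-measure (F1) from confusion-matrix counts: the harmonic mean of
# precision and recall. tp/fp/fn are true positives, false positives,
# and false negatives for detected fire events.
def f1_score(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)  # fraction of flagged fires that were real
    recall = tp / (tp + fn)     # fraction of real fires that were flagged
    return 2 * precision * recall / (precision + recall)
```

A detector that flags many spurious events inflates `fp`, dragging precision and hence F1 down, which is why the score is a useful proxy for the false alarm rate.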
Abstract:
Software engineering researchers are challenged to provide increasingly more powerful levels of abstraction to address the rising complexity inherent in software solutions. One new development paradigm that places models as abstractions at the forefront of the development process is Model-Driven Software Development (MDSD). MDSD considers models as first-class artifacts, extending the capability for engineers to use concepts from the problem domain of discourse to specify apropos solutions. A key component in MDSD is domain-specific modeling languages (DSMLs), which are languages with focused expressiveness, targeting a specific taxonomy of problems. The de facto approach is to first transform DSML models to an intermediate artifact in a high-level language (HLL), e.g., Java or C++, then execute the resulting code. Our research group has developed a class of DSMLs, referred to as interpreted DSMLs (i-DSMLs), where models are directly interpreted by a specialized execution engine with semantics based on model changes at runtime. This execution engine uses a layered architecture and is referred to as a domain-specific virtual machine (DSVM). As the domain-specific model being executed descends the layers of the DSVM, the semantic gap between the user-defined model and the services provided by the underlying infrastructure is closed. The focus of this research is the synthesis engine, the layer in the DSVM which transforms i-DSML models into executable scripts for the next lower layer to process. The appeal of an i-DSML is constrained because it possesses unique semantics contained within the DSVM. Existing DSVMs for i-DSMLs exhibit tight coupling between the implicit model of execution and the semantics of the domain, making it difficult to develop DSVMs for new i-DSMLs without a significant investment of resources. At the onset of this research, only one i-DSML had been created, for the user-centric communication domain, using the aforementioned approach.
This i-DSML is the Communication Modeling Language (CML), and its DSVM is the Communication Virtual Machine (CVM). A major problem with the CVM's synthesis engine is that the domain-specific knowledge (DSK) and the model of execution (MoE) are tightly interwoven; consequently, subsequent DSVMs would need to be developed from inception with no reuse of expertise. This dissertation investigates how to decouple the DSK from the MoE and subsequently produce a generic model of execution (GMoE) from the remaining application logic. This GMoE can be reused to instantiate synthesis engines for DSVMs in other domains. The generalized approach to developing the model synthesis component of i-DSML interpreters utilizes a reusable framework loosely coupled to the DSK as swappable framework extensions. This approach involves first creating an i-DSML and its DSVM for a second domain, demand-side smart grid (microgrid) energy management, and designing the synthesis engine so that the DSK and MoE are easily decoupled. To validate the utility of the approach, the synthesis engines are instantiated using the GMoE and the DSKs of the two aforementioned domains, and an empirical study is performed to support our claim of reduced development effort.
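The decoupling described above, a reusable generic engine with domain knowledge plugged in as a swappable extension, can be sketched in a strategy-style shape; all class and method names below are hypothetical illustrations, not taken from CVM or its successors:

```python
# Hypothetical sketch: a generic model of execution (GMoE) reused across
# domains, with domain-specific knowledge (DSK) as a swappable extension.
class DomainKnowledge:
    """DSK extension point plugged into the generic engine."""
    def interpret(self, model_change: str) -> str:
        raise NotImplementedError

class CommunicationDSK(DomainKnowledge):
    """Illustrative DSK for a user-centric communication domain."""
    def interpret(self, model_change: str) -> str:
        return f"connect({model_change})"

class MicrogridDSK(DomainKnowledge):
    """Illustrative DSK for a demand-side microgrid domain."""
    def interpret(self, model_change: str) -> str:
        return f"dispatch({model_change})"

class SynthesisEngine:
    """GMoE: domain-agnostic execution logic, instantiated with a DSK."""
    def __init__(self, dsk: DomainKnowledge):
        self.dsk = dsk

    def synthesize(self, model_changes):
        # turn runtime model changes into scripts for the next lower layer
        return [self.dsk.interpret(c) for c in model_changes]
```

Instantiating a synthesis engine for a new domain then means writing only a new `DomainKnowledge` subclass; the `SynthesisEngine` logic is reused unchanged, which is the source of the claimed reduction in development effort.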
Abstract:
This study explored the critical features of temporal synchrony for the facilitation of prenatal perceptual learning with respect to unimodal stimulation, using an animal model, the bobwhite quail. The following related hypotheses were examined: (1) the availability of temporal synchrony is a critical feature to facilitate prenatal perceptual learning, (2) a single temporally synchronous note is sufficient to facilitate prenatal perceptual learning with respect to unimodal stimulation, and (3) in situations where embryos are exposed to a single temporally synchronous note, facilitated perceptual learning with respect to unimodal stimulation will be optimal when the temporally synchronous note occurs at the onset of the stimulation bout. To assess these hypotheses, two experiments were conducted in which quail embryos were exposed to various audio-visual configurations of a bobwhite maternal call and tested 24 hr after hatching for evidence of facilitated prenatal perceptual learning with respect to unimodal stimulation. Experiment 1 explored whether intermodal equivalence was sufficient to facilitate prenatal perceptual learning with respect to unimodal stimulation. A Bimodal Sequential Temporal Equivalence (BSTE) condition was created that provided embryos with sequential auditory and visual stimulation in which the same amodal properties (rate, duration, rhythm) were made available across modalities. Experiment 2 assessed: (a) whether a limited number of temporally synchronous notes are sufficient for facilitated prenatal perceptual learning with respect to unimodal stimulation, and (b) whether there is a relationship between the timing of occurrence of a temporally synchronous note and the facilitation of prenatal perceptual learning. Results revealed that prenatal exposure to BSTE was not sufficient to facilitate perceptual learning.
In contrast, a maternal call that contained a single temporally synchronous note was sufficient to facilitate embryos’ prenatal perceptual learning with respect to unimodal stimulation. Furthermore, the most salient prenatal condition was that which contained the synchronous note at the onset of the call burst. Embryos’ prenatal perceptual learning of the call was four times faster in this condition than when exposed to a unimodal call. Taken together, bobwhite quail embryos’ remarkable sensitivity to temporal synchrony suggests that this amodal property plays a key role in attention and learning during prenatal development.
Abstract:
Moving objects database systems are the most challenging sub-category among spatio-temporal database systems. A database system that updates in real time the location information of GPS-equipped moving vehicles has to meet even stricter requirements. Currently existing data storage models and indexing mechanisms work well only when the number of moving objects in the system is relatively small. This dissertation research aimed at the real-time tracking and history retrieval of massive numbers of vehicles moving on road networks. A total solution has been provided for the real-time update of the vehicles' location and motion information, range queries on current and history data, and prediction of vehicles' movement in the near future. To achieve these goals, a new approach called Segmented Time Associated to Partitioned Space (STAPS) was first proposed in this dissertation for building and manipulating the indexing structures for moving objects databases. Applying the STAPS approach, an indexing structure associating a time interval tree with each road segment was developed for real-time database systems of vehicles moving on road networks. The indexing structure uses affordable storage to support real-time data updates and efficient query processing. The data update and query processing performance it provides is consistent, without restrictions such as a time window or an assumption of linear moving trajectories. An application system design based on a distributed system architecture with centralized organization was developed to maximally support the proposed data and indexing structures. The suggested system architecture is highly scalable and flexible. Finally, based on a real-world application model of region-wide vehicle movement, the main issues in implementing such a system were addressed.
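The core idea of associating a time structure with each road segment can be sketched as follows. This is a deliberately simplified illustration using plain per-segment interval lists, not the STAPS interval-tree structure itself:

```python
from collections import defaultdict

# Simplified illustration (not STAPS itself): each road segment keeps its
# own list of time intervals recording when a vehicle occupied it, so a
# spatio-temporal range query touches only the matching segment's data.
class SegmentIndex:
    def __init__(self):
        self.segments = defaultdict(list)  # segment_id -> [(t0, t1, vid)]

    def record(self, segment_id, t0, t1, vehicle_id):
        """Log that vehicle_id occupied segment_id during [t0, t1]."""
        self.segments[segment_id].append((t0, t1, vehicle_id))

    def query(self, segment_id, t0, t1):
        """Vehicles whose interval overlaps [t0, t1] on this segment."""
        return [vid for (a, b, vid) in self.segments[segment_id]
                if a <= t1 and b >= t0]
```

Partitioning time records by road segment is what keeps both updates and range queries cheap; a production structure would replace the per-segment list with an interval tree to make overlap queries logarithmic rather than linear.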
Abstract:
The main focus of this thesis was to gain a better understanding of the dynamics of risk perception and its influence on people's evacuation behavior. Another major focus was to improve our knowledge of geo-spatial and temporal variations in risk perception and hurricane evacuation behavior. A longitudinal dataset of more than eight hundred households was collected following two major hurricane events, Ivan and Katrina. The longitudinal survey data was geocoded, and a geo-spatial database was integrated with it. The geospatial database was composed of distance, elevation, and hazard parameters with respect to the respondent's household location. A set of Bivariate Probit (BP) models suggests that geospatial variables had significant influence in explaining hurricane risk perception and evacuation behavior during both hurricanes. The findings also indicated that people made their evacuation decisions in coherence with their risk perception. In addition, people updated their hurricane evacuation decision in a subsequent similar event.
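A bivariate probit ties two binary outcomes (here, perceiving high risk and evacuating) to a pair of correlated latent normal errors. The sketch below writes its log-likelihood directly using SciPy's bivariate normal CDF; it is an illustrative sketch of the model family, not the study's estimation code, and `xb1`/`xb2` stand in for linear predictors built from the geospatial covariates:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative bivariate probit log-likelihood for two correlated binary
# outcomes, e.g. y1 = perceives high risk, y2 = evacuates. xb1 and xb2
# are the linear predictors; rho is the latent-error correlation.
def bp_loglik(y1, y2, xb1, xb2, rho):
    ll = 0.0
    for yi, yj, a, b in zip(y1, y2, xb1, xb2):
        s1 = 2 * yi - 1  # maps y in {0,1} to a sign in {-1,+1}
        s2 = 2 * yj - 1
        # sign-flipping selects the correct orthant probability
        cov = [[1.0, s1 * s2 * rho], [s1 * s2 * rho, 1.0]]
        p = multivariate_normal.cdf([s1 * a, s2 * b],
                                    mean=[0.0, 0.0], cov=cov)
        ll += np.log(p)
    return ll
```

Estimation then maximizes this log-likelihood over the predictor coefficients and `rho`; a significantly positive `rho` is what indicates that evacuation decisions move together with risk perception beyond what the covariates explain.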
Abstract:
Landscape characteristics, disturbances, and temporal variability influence predator-prey relationships, but are often overlooked in experimental studies. In the Everglades, seasonal disturbances force the spatial overlap of predators and prey, potentially increasing predation risk for prey. This study examined seasonal and diel patterns of fish use of canals and assessed predation risk for small fishes using an encounter rate model. I deployed an imaging sonar in Everglades canals to quantify density and swimming speeds of fishes, and detect anti-predator behaviors by small fishes. Generally, seasonal declines of marsh water-levels increased the density of large fishes in canals. Densities of small and large fishes were positively correlated and, as small-fish density increased, schooling frequency also increased. At night, schools disbanded and small fishes were observed congregating along the canal edge. The encounter rate model predicted highest predator-prey encounters during the day, but access to cover may reduce predation risk for small fishes.
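The abstract does not specify which encounter rate model was applied; one common formulation for swimming predators and prey is Gerritsen and Strickler's (1977) cruising-predator model, sketched here with hypothetical parameter values purely for illustration:

```python
import math

# One common encounter-rate formulation (Gerritsen & Strickler 1977) for
# a cruising predator; the study's exact model is unspecified, so the
# function and its example inputs are illustrative only.
def encounter_rate(prey_density, radius, v_prey, v_pred):
    """Encounters per unit time; valid when v_pred >= v_prey.

    prey_density: prey per unit volume
    radius: predator's detection radius
    v_prey, v_pred: swimming speeds of prey and predator
    """
    assert v_pred >= v_prey
    return (math.pi * radius ** 2 * prey_density / 3.0) * \
           (3.0 * v_pred ** 2 + v_prey ** 2) / v_pred
```

Because encounters rise with both densities and swimming speeds, the seasonal crowding of fast-moving large fish into canals raises predicted daytime encounters, consistent with the pattern reported above.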