938 results for Predictive models


Relevance: 70.00%

Abstract:

A field study in three vineyards in southern Queensland (Australia) was carried out to develop predictive models for individual leaf area and shoot leaf area of two grapevine (Vitis vinifera L.) cultivars, Cabernet Sauvignon and Shiraz. Digital image analysis was used to measure leaf vein length and leaf area. Stepwise regressions of untransformed and transformed models, consisting of up to six predictor variables for leaf area and three for shoot leaf area, were carried out to obtain the most efficient models. High correlation coefficients were found for the log10-transformed individual leaf area and shoot leaf area models. The significance of predictor variables in the models varied across vineyards and cultivars, demonstrating the discontinuous and heterogeneous nature of vineyards. The application of this work in a grapevine modelling environment and in a dynamic vineyard management context is discussed. Sample sizes for quantifying individual leaf areas and shoot leaf areas are proposed based on target margins of error for the sampled data.
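The form of model the study selects (a log10-transformed regression of leaf area on vein length) can be sketched on synthetic numbers; the data, coefficients and single-predictor choice below are illustrative only, not the study's fitted models:

```python
import numpy as np

# Hypothetical illustration: log10-transformed regression of individual
# leaf area on main vein length (synthetic measurements, roughly area ~ length^2).
vein_length = np.array([4.0, 5.5, 7.0, 8.5, 10.0, 11.5])      # cm
leaf_area = np.array([18.0, 33.0, 55.0, 80.0, 110.0, 150.0])  # cm^2

x = np.log10(vein_length)
y = np.log10(leaf_area)

# Ordinary least squares on the log10 scale: log10(area) = a + b*log10(vein)
b, a = np.polyfit(x, y, 1)
pred = 10 ** (a + b * np.log10(9.0))  # back-transformed prediction at 9 cm

r = np.corrcoef(x, y)[0, 1]
print(round(r, 3))  # high correlation on the transformed scale
```

Back-transforming the prediction (the `10 **` step) is what lets the log10 model report leaf area in the original units.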

Relevance: 70.00%

Abstract:

Space-for-time substitution is often used in predictive models because long-term time-series data are not available. Critics of this method suggest that factors other than the target driver may affect ecosystem response and could vary spatially, producing misleading results. Monitoring data from the Florida Everglades were used to test whether spatial data can be substituted for temporal data in forecasting models. Spatial models that predicted bluefin killifish (Lucania goodei) population response to a drying event performed comparably to, and sometimes better than, temporal models. Models worked best when results were not extrapolated beyond the range of variation encompassed by the original dataset. These results were compared with other studies to determine whether ecosystem features influence the feasibility of space-for-time substitution. Taken in the context of other studies, these results suggest space-for-time substitution may work best in ecosystems with low beta-diversity, high connectivity between sites, and a small lag in organismal response to the driver variable.

Relevance: 70.00%

Abstract:

To provide biological insights into transcriptional regulation, several groups have recently presented models relating the transcription factors (TFs) bound at a promoter to the downstream gene's mean transcript level or transcript production rate over time. However, transcript production is dynamic, responding to changes in TF concentrations over time. In addition, TFs are not the only factors that bind promoters; other DNA-binding factors (DBFs), especially nucleosomes, bind as well, resulting in competition between DBFs for the same genomic locations. Transcription is also regulated by elements beyond TFs: within the core promoter, various regulatory elements influence RNAPII recruitment, pre-initiation complex (PIC) formation, RNAPII scanning for the transcription start site (TSS), and transcription initiation. Moreover, downstream of the TSS, nucleosomes have been proposed to resist RNAPII elongation.

Here, we provide a machine learning framework to predict transcript production rates from DNA sequences. We applied this framework in the yeast S. cerevisiae in two scenarios: (a) predicting the dynamic transcript production rate during the cell cycle for native promoters; and (b) predicting the mean transcript production rate over time for synthetic promoters. To our knowledge, this framework is the first successful attempt to predict dynamic transcript production rates from DNA sequence alone: on the cell-cycle data set, we obtained a Pearson correlation coefficient Cp = 0.751 and a coefficient of determination r2 = 0.564 on the test set when predicting the dynamic transcript production rate over time. Also, in the DREAM6 Gene Promoter Expression Prediction challenge, our fitted model outperformed all participating teams, as well as a model combining the best team's k-mer-based sequence features with biologically mechanistic features from another study, on all scoring metrics.

Moreover, our framework can identify generalizable features by interpreting the highly predictive models, thereby providing support for hypothesized mechanisms of transcriptional regulation. With the learned sparse linear models, we obtained results supporting the following biological insights: (a) TFs govern the probability of RNAPII recruitment and initiation, possibly through interactions with PIC components and transcription cofactors; (b) the core promoter amplifies transcript production, probably by influencing PIC formation, RNAPII recruitment, DNA melting, RNAPII search for and selection of the TSS, release of RNAPII from general transcription factors, and thereby initiation; (c) there is strong transcriptional synergy between TFs and core promoter elements; (d) the regulatory elements within the core promoter region are more than the TATA box and nucleosome-free region, suggesting the existence of still-unidentified TAF-dependent and cofactor-dependent core promoter elements in the yeast S. cerevisiae; (e) nucleosome occupancy is helpful for representing the regulatory roles of the +1 and -1 nucleosomes in transcription.
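The kind of interpretable sparse model described above can be sketched, in a much-reduced form, as lasso regression on k-mer counts. Everything below is illustrative: toy random promoter sequences, a synthetic "production rate" driven by a single dinucleotide, and a plain coordinate-descent lasso rather than the thesis's actual features or solver.

```python
import itertools
import numpy as np

def kmer_features(seqs, k=2):
    """Count k-mer occurrences in each sequence (a toy sequence featurisation)."""
    kmers = ["".join(p) for p in itertools.product("ACGT", repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    X = np.zeros((len(seqs), len(kmers)))
    for r, s in enumerate(seqs):
        for i in range(len(s) - k + 1):
            X[r, index[s[i:i + k]]] += 1
    return X

def lasso(X, y, lam=0.5, iters=200):
    """Plain coordinate-descent lasso; returns a sparse weight vector."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(p):
            if col_sq[j] == 0:
                continue
            r = y - X @ w + X[:, j] * w[j]  # residual excluding feature j
            rho = X[:, j] @ r
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0) / col_sq[j]  # soft-threshold
    return w

# Synthetic promoters whose "production rate" depends only on TA content.
rng = np.random.default_rng(0)
seqs = ["".join(rng.choice(list("ACGT"), 50)) for _ in range(40)]
X = kmer_features(seqs)
y = X[:, 12]  # suppose the "TA" dinucleotide drives the rate ("TA" is index 12)
w = lasso(X, y)
print(int(np.argmax(np.abs(w))))  # should recover the driving k-mer (index 12)
```

The L1 penalty zeroes out uninformative k-mers, which is what makes the fitted weights readable as candidate regulatory elements.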

Relevance: 70.00%

Abstract:

Aim: When faced with dichotomous events, such as the presence or absence of a species, discrimination capacity (the ability to separate instances of presence from instances of absence) is usually the only characteristic assessed when evaluating the performance of predictive models. Although neglected, calibration or reliability (how well the estimated probability of presence represents the observed proportion of presences) is another aspect of model performance that provides important information. In this study, we explore how changes in the distribution of the probability of presence make discrimination capacity a context-dependent characteristic of models. For the first time, we explain the implications that ignoring the context dependence of discrimination can have for the interpretation of species distribution models.
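The distinction between discrimination and calibration can be made concrete with a small sketch. The functions below (illustrative, not from the paper) compute AUC as a rank statistic and a simple binned reliability error for made-up presence/absence predictions: two sets of predicted probabilities with identical ranking have identical discrimination but very different calibration.

```python
import numpy as np

def auc(y, p):
    """Discrimination: chance a random presence scores above a random absence."""
    pos, neg = p[y == 1], p[y == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def calibration_error(y, p, bins=2):
    """Reliability: worst gap between observed presence rate and mean
    predicted probability across probability bins."""
    edges = np.linspace(0, 1, bins + 1)
    errs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (p >= lo) & ((p < hi) if hi < 1 else (p <= hi))
        if m.any():
            errs.append(abs(y[m].mean() - p[m].mean()))
    return max(errs)

# Made-up data: 10 sites, presence rate 0.2 at low-scoring sites, 0.8 at high.
y = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 0])
p_cal = np.array([0.2] * 5 + [0.8] * 5)  # matches observed proportions
p_off = p_cal / 2                        # same ranking, systematically too low

assert auc(y, p_cal) == auc(y, p_off)    # identical discrimination
print(calibration_error(y, p_cal), calibration_error(y, p_off))
```

AUC is invariant to any monotone rescaling of the probabilities, which is exactly why it cannot detect the miscalibration that `p_off` exhibits.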

Relevance: 60.00%

Abstract:

This paper deals with the problem of using data mining models in a real-world situation where the user cannot provide all the inputs with which the predictive model was built. A learning system framework, the Query Based Learning System (QBLS), is developed for improving the performance of predictive models in practice, where not all inputs are available for querying the system. An automatic feature selection algorithm, Query Based Feature Selection (QBFS), is developed to select features that balance a relatively minimal subset of features against relatively maximal classification accuracy. The performance of the QBLS system and the QBFS algorithm is successfully demonstrated with a real-world application.
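QBFS itself is not specified here, but the trade-off it targets (a near-minimal feature subset versus near-maximal accuracy) can be sketched with a generic greedy forward selection that stops when the accuracy gain falls below a tolerance; the classifier and data below are invented, not the published algorithm:

```python
import numpy as np

def greedy_select(X, y, score, tol=0.01):
    """Forward selection sketch: keep adding the feature that most improves
    the score; stop when the gain drops below `tol` (not the actual QBFS)."""
    selected, best = [], 0.0
    remaining = list(range(X.shape[1]))
    while remaining:
        gains = [(score(X[:, selected + [j]], y), j) for j in remaining]
        s, j = max(gains)
        if s - best < tol:
            break
        selected.append(j)
        remaining.remove(j)
        best = s
    return selected, best

def nearest_centroid_accuracy(Xs, y):
    """Toy classifier: nearest class centroid, resubstitution accuracy."""
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
    return (pred == y).mean()

rng = np.random.default_rng(1)
y = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 5))
X[:, 2] += 4.0 * y            # only feature 2 is informative
sel, acc = greedy_select(X, y, nearest_centroid_accuracy)
print(sel, acc)
```

Stopping on marginal gain, rather than exhausting all features, is what keeps the selected subset small, the property the QBFS balance criterion is after.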

Relevance: 60.00%

Abstract:

Harmful Algal Blooms (HABs) are a worldwide problem that has been increasing in frequency and extent over the past several decades. HABs severely damage aquatic ecosystems by destroying benthic habitat, reducing invertebrate and fish populations, and affecting larger species such as dugong that rely on seagrasses for food. Few statistical models for predicting HAB occurrences have been developed, and in common with most predictive models in ecology, those that have been developed do not fully account for uncertainties in parameters and model structure. This makes management decisions based on these predictions more risky than might be supposed. We used a probit time series model and Bayesian Model Averaging (BMA) to predict occurrences of blooms of Lyngbya majuscula, a toxic cyanophyte, in Deception Bay, Queensland, Australia. We found a suite of useful predictors for HAB occurrence, with temperature figuring prominently in the models holding the majority of posterior support, and a model consisting of the single covariate, average monthly minimum temperature, showed by far the greatest posterior support. Alternative model averaging strategies were compared: one used the full posterior distribution, while a simpler approach used the majority of the posterior distribution for predictions but with vastly fewer models. Both BMA approaches showed excellent predictive performance with little difference in their predictive capacity. Applications of BMA are still rare in ecology, particularly in management settings. This study demonstrates the power of BMA as an important management tool that is capable of high predictive performance while fully accounting for both parameter and model uncertainty.
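The averaging step at the heart of BMA can be sketched generically: approximate posterior model probabilities weight each candidate model's prediction. The weights here come from BIC under equal prior model probabilities, a common approximation, rather than the probit time-series posteriors used in the paper; all numbers are hypothetical.

```python
import numpy as np

def bic_weights(bics):
    """Approximate posterior model probabilities from BIC values
    (equal prior model probabilities assumed)."""
    d = np.asarray(bics) - min(bics)
    w = np.exp(-0.5 * d)
    return w / w.sum()

def bma_prediction(preds, bics):
    """Bayesian Model Averaging: posterior-weighted mean of model predictions."""
    return bic_weights(bics) @ np.asarray(preds)

# Three hypothetical candidate models for next month's bloom probability:
bics = [210.0, 214.0, 222.0]   # suppose the minimum-temperature model fits best
preds = [0.80, 0.55, 0.30]     # each model's predicted bloom probability

w = bic_weights(bics)
print(np.round(w, 3), round(bma_prediction(preds, bics), 3))
```

Because the averaged prediction spreads weight over all candidate models, it carries model uncertainty into the forecast instead of conditioning on a single "best" model.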

Relevance: 60.00%

Abstract:

Query reformulation is a key user behavior during Web search. Our research goal is to develop predictive models of query reformulation during Web searching. This article reports results from a study in which we automatically classified the query-reformulation patterns for 964,780 Web searching sessions, composed of 1,523,072 queries, to predict the next query reformulation. We employed an n-gram modeling approach to describe the probability of users transitioning from one query-reformulation state to another to predict their next state. We developed first-, second-, third-, and fourth-order models and evaluated each model for accuracy of prediction, coverage of the dataset, and complexity of the possible pattern set. The results show that Reformulation and Assistance account for approximately 45% of all query reformulations; furthermore, the results demonstrate that the first- and second-order models provide the best predictability, between 28 and 40% overall and higher than 70% for some patterns. Implications are that the n-gram approach can be used for improving searching systems and searching assistance.
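A first-order version of this n-gram approach can be sketched as a bigram transition model over reformulation states. The states and sessions below are invented simplifications of the taxonomy used in the study:

```python
from collections import Counter, defaultdict

def train_bigram(sessions):
    """First-order model: P(next state | current state) from session logs."""
    counts = defaultdict(Counter)
    for states in sessions:
        for cur, nxt in zip(states, states[1:]):
            counts[cur][nxt] += 1
    return {cur: {s: n / sum(c.values()) for s, n in c.items()}
            for cur, c in counts.items()}

def predict_next(model, cur):
    """Most probable next reformulation state after `cur`."""
    return max(model[cur], key=model[cur].get)

# Invented sessions over simplified states (New, Reformulation, Assistance):
sessions = [
    ["New", "Reformulation", "Reformulation", "Assistance"],
    ["New", "Reformulation", "Assistance"],
    ["New", "Assistance", "Reformulation"],
]
model = train_bigram(sessions)
print(predict_next(model, "New"))                 # Reformulation
print(round(model["New"]["Reformulation"], 2))    # 0.67
```

Higher-order variants condition on the previous two or three states instead of one, which is the trade-off between predictability, coverage and pattern-set complexity the article evaluates.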

Relevance: 60.00%

Abstract:

Cold-formed steel members have been widely used in residential, industrial and commercial buildings as primary load bearing structural elements and non-load bearing structural elements (partitions) due to their advantages, such as a higher strength-to-weight ratio, over other structural materials such as hot-rolled steel, timber and concrete. Cold-formed steel members are often made from thin steel sheets and hence they are more susceptible to various buckling modes. Generally, short columns are susceptible to local or distortional buckling, while long columns are susceptible to flexural or flexural-torsional buckling. Fire safety design of building structures is an essential requirement as fire events can cause loss of property and lives. Therefore it is essential to understand the fire performance of light gauge cold-formed steel structures under fire conditions. The buckling behaviour of cold-formed steel compression members under fire conditions has not yet been well investigated, and hence there is a lack of knowledge of the fire performance of cold-formed steel compression members. Current cold-formed steel design standards do not provide adequate design guidelines for the fire design of cold-formed steel compression members. Therefore a research project based on extensive experimental and numerical studies was undertaken at the Queensland University of Technology to investigate the buckling behaviour of light gauge cold-formed steel compression members under simulated fire conditions. As the first phase of this research, a detailed review was undertaken of the mechanical properties of light gauge cold-formed steels at elevated temperatures, and the most reliable predictive models for mechanical properties and stress-strain models based on detailed experimental investigations were identified. Their accuracy was verified experimentally by carrying out a series of tensile coupon tests at ambient and elevated temperatures.
As the second phase of this research, local buckling behaviour was investigated based on experimental and numerical investigations at ambient and elevated temperatures. First, a series of 91 local buckling tests was carried out at ambient and elevated temperatures on lipped and unlipped channels made of G250-0.95, G550-0.95, G250-1.95 and G450-1.90 cold-formed steels. Suitable finite element models were then developed to simulate the experimental conditions. These models were converted to ideal finite element models to undertake a detailed parametric study. Finally, all the ultimate load capacity results for local buckling were compared with the available design methods based on AS/NZS 4600, BS 5950 Part 5, Eurocode 3 Part 1.2 and the direct strength method (DSM), and suitable recommendations were made for the fire design of cold-formed steel compression members subject to local buckling. As the third phase of this research, flexural-torsional buckling behaviour was investigated experimentally and numerically. Two series of 39 flexural-torsional buckling tests were undertaken at ambient and elevated temperatures. The first series consisted of 2800 mm long lipped channel columns of G550-0.95, G250-1.95 and G450-1.90 cold-formed steels, while the second series contained 1800 mm long lipped channel columns of the same steel thicknesses and strength grades. All the experimental tests were simulated using a suitable finite element model, and the same model was used in a detailed parametric study following validation. Based on the comparison of results from the experimental and parametric studies with the available design methods, suitable design recommendations were made. This thesis presents a detailed description of the experimental and numerical studies undertaken on the mechanical properties and the local and flexural-torsional buckling behaviour of cold-formed steel compression members at ambient and elevated temperatures.
It also describes the currently available ambient temperature design methods and their accuracy when used for fire design with appropriately reduced mechanical properties at elevated temperatures. Available fire design methods are also included and their accuracy in predicting the ultimate load capacity at elevated temperatures was investigated. This research has shown that the current ambient temperature design methods are capable of predicting the local and flexural-torsional buckling capacities of cold-formed steel compression members at elevated temperatures with the use of reduced mechanical properties. However, the elevated temperature design method in Eurocode 3 Part 1.2 is overly conservative and hence unsuitable, particularly in the case of flexural-torsional buckling at elevated temperatures.

Relevance: 60.00%

Abstract:

The inquiry documented in this thesis is located at the nexus of technological innovation and traditional schooling. As we enter the second decade of a new century, few would argue against the increasingly urgent need to integrate digital literacies with traditional academic knowledge. Yet, despite substantial investments from governments and businesses, the adoption and diffusion of contemporary digital tools in formal schooling remain sluggish. To date, research on technology adoption in schools tends to take a deficit perspective of schools and teachers, with the lack of resources and teacher ‘technophobia’ most commonly cited as barriers to digital uptake. Corresponding interventions that focus on increasing funding and upskilling teachers, however, have made little difference to adoption trends in the last decade. Empirical evidence that explicates the cultural and pedagogical complexities of innovation diffusion within long-established conventions of mainstream schooling, particularly from the standpoint of students, is wanting. To address this knowledge gap, this thesis inquires into how students evaluate and account for the constraints and affordances of contemporary digital tools when they engage with them as part of their conventional schooling. It documents the attempted integration of a student-led Web 2.0 learning initiative, known as the Student Media Centre (SMC), into the schooling practices of a long-established, high-performing independent senior boys’ school in urban Australia. The study employed an ‘explanatory’ two-phase research design (Creswell, 2003) that combined complementary quantitative and qualitative methods to achieve both breadth of measurement and richness of characterisation. In the initial quantitative phase, a self-reported questionnaire was administered to the senior school student population to determine adoption trends and predictors of SMC usage (N=481). 
Measurement constructs included individual learning dispositions (learning and performance goals, cognitive playfulness and personal innovativeness), as well as social and technological variables (peer support, perceived usefulness and ease of use). Incremental predictive models of SMC usage were conducted using Classification and Regression Tree (CART) modelling: (i) individual-level predictors, (ii) individual and social predictors, and (iii) individual, social and technological predictors. Peer support emerged as the best predictor of SMC usage. Other salient predictors include perceived ease of use and usefulness, cognitive playfulness and learning goals. On the whole, an overwhelming proportion of students reported low usage levels, low perceived usefulness and a lack of peer support for engaging with the digital learning initiative. The small minority of frequent users reported having high levels of peer support and robust learning goal orientations, rather than being predominantly driven by performance goals. These findings indicate that tensions around social validation, digital learning and academic performance pressures influence students' engagement with the Web 2.0 learning initiative. The qualitative phase that followed provided insights into these tensions by shifting the analytics from individual attitudes and behaviours to shared social and cultural reasoning practices that explain students' engagement with the innovation. Six in-depth focus groups, comprising 60 students with different levels of SMC usage, were conducted, audio-recorded and transcribed. Textual data were analysed using Membership Categorisation Analysis. Students' accounts converged around a key proposition: the Web 2.0 learning initiative was useful-in-principle but useless-in-practice.
While students endorsed the usefulness of the SMC for enhancing multimodal engagement, extending peer-to-peer networks and acquiring real-world skills, they also called attention to a number of constraints that obfuscated the realisation of these design affordances in practice. These constraints were cast in terms of three binary formulations of social and cultural imperatives at play within the school: (i) 'cool/uncool', (ii) 'dominant staff/compliant student', and (iii) 'digital learning/academic performance'. The first formulation foregrounds the social stigma of the SMC among peers and its resultant lack of positive network benefits. The second relates to students' perception of the school culture as authoritarian and punitive, with adverse effects on the very student agency required to drive the innovation. The third points to academic performance pressures in a crowded curriculum with tight timelines. Taken together, findings from both phases of the study provide the following key insights. First, students endorsed the learning affordances of contemporary digital tools such as the SMC for enhancing their current schooling practices. For the majority of students, however, these learning affordances were overshadowed by the performative demands of schooling, both social and academic. The student participants saw engagement with the SMC in-school as distinct from, even oppositional to, the conventional social and academic performance indicators of schooling, namely (i) being 'cool' (or at least 'not uncool'), (ii) sufficiently 'compliant', and (iii) achieving good academic grades. Their reasoned response, therefore, was simply to resist engagement with the digital learning innovation. Second, a small minority of students seemed dispositionally inclined to negotiate the learning affordances and performance constraints of digital learning and traditional schooling more effectively than others.
These students were able to engage more frequently and meaningfully with the SMC in school. Their ability to adapt and traverse seemingly incommensurate social and institutional identities and norms is theorised as cultural agility – a dispositional construct that comprises personal innovativeness, cognitive playfulness and learning goals orientation. The logic then is ‘both and’ rather than ‘either or’ for these individuals with a capacity to accommodate both learning and performance in school, whether in terms of digital engagement and academic excellence, or successful brokerage across multiple social identities and institutional affiliations within the school. In sum, this study takes us beyond the familiar terrain of deficit discourses that tend to blame institutional conservatism, lack of resourcing and teacher resistance for low uptake of digital technologies in schools. It does so by providing an empirical base for the development of a ‘third way’ of theorising technological and pedagogical innovation in schools, one which is more informed by students as critical stakeholders and thus more relevant to the lived culture within the school, and its complex relationship to students’ lives outside of school. It is in this relationship that we find an explanation for how these individuals can, at the one time, be digital kids and analogue students.

Relevance: 60.00%

Abstract:

The studies in the thesis were derived from a program of research focused on centre-based child care in Australia. The studies constituted an ecological analysis as they examined proximal and distal factors which have the potential to affect children's developmental opportunities (Bronfenbrenner, 1979). The project was conducted in thirty-two child care centres located in south-east Queensland. Participants in the research included staff members at the centres, families using the centres and their children. The first study described the personal and professional characteristics of one hundred and forty-four child care workers, as well as their job satisfaction and job commitment. Factors impinging on the stability of care afforded to children were examined, specifically child care workers' intentions to leave their current position and actual staff turnover at a twelve month follow-up. This is an exosystem analysis (Bronfenbrenner & Crouter, 1983), as it examined the world of work for carers: a setting not directly involving the developing child, but which has implications for children's experiences. Staff job satisfaction was focused on working with children and other adults, including parents and colleagues. Involvement with children was reported as being the most rewarding aspect of the work. This intrinsic satisfaction was enough to sustain caregivers' efforts to maintain their employment in child care programs. It was found that, while improving working conditions may help to reduce turnover, it is likely that moderate turnover rates will remain, as child care staff work in relatively small centres and they leave in order to improve career prospects. Departure from a child care job appeared to be as much about improving career opportunities or changing personal circumstances as it was about poor wages and working conditions. In the second study, factors that influence maternal satisfaction with child care arrangements were examined.
The focus included examination of the nature and qualities of parental interaction with staff. This was a mesosystem analysis (Bronfenbrenner & Crouter, 1983), as it considered the links between family and child care settings. Two hundred and twenty-two questionnaires were returned from mothers whose children were enrolled in the participating centres. It was found that maternal satisfaction with child care encompassed the domains of child-centred and parent-centred satisfaction. The nature and range of responses in the quantitative and qualitative data indicated that these parents were genuinely satisfied with their children's care. In the prediction of maternal satisfaction with child care, single parents, mothers with high role satisfaction, and mothers who were satisfied with the frequency of staff contact and degree of supportive communication had higher levels of satisfaction with their child care arrangements. The third study described the structural and process variations within child care programs and examined program differences for compliance with regulations and differences by profit status of the centre, as a microsystem analysis (Bronfenbrenner, 1979). Observations were made in eighty-three programs which served children from two to five years. The results of the study affirmed beliefs that nonprofit centres are superior in the quality of care provided, although this was not to a level which meant that the care in for-profit centres was inadequate. Regulation of structural features of child care programs, per se, did not guarantee higher quality child care as measured by global or process indicators. The final study represented an integration of a range of influences in child care and family settings which may impact on development. Features of child care programs which predict children's social and cognitive development, while taking into account child and family characteristics, were identified. 
Results were consistent with other research findings which show that child and family characteristics and child care quality predict children's development. Child care quality was more important to the prediction of social development, while family factors appeared to be more predictive of cognitive/language development. An influential variable predictive of development was the period of time which the child had been in the centre. This highlighted the importance of the stability of child care arrangements. Child care quality features which had most influence were global ratings of the qualities of the program environment. However, results need to be interpreted cautiously as the explained variance in the predictive models developed was low. The results of these studies are discussed in terms of the implications for practice and future research. Considerations for an expanded view of ecological approaches to child care research are outlined. Issues discussed include the need to generate child care research which is relevant to social policy development, the implications of market driven policies for child care services, professionalism and professionalisation of child care work, and the need to reconceptualise child care research when the goal is to develop greater theoretical understanding about child care environments and developmental processes.

Relevance: 60.00%

Abstract:

One major gap in transportation system safety management is the ability to assess the safety ramifications of design changes for both new road projects and modifications to existing roads. To fulfill this need, FHWA and its many partners are developing a safety forecasting tool, the Interactive Highway Safety Design Model (IHSDM). The tool will be used by roadway design engineers, safety analysts, and planners throughout the United States. As such, the statistical models embedded in IHSDM will need to be able to forecast safety impacts under a wide range of roadway configurations and environmental conditions for a wide range of driver populations and will need to be able to capture elements of driving risk across states. One of the IHSDM algorithms developed by FHWA and its contractors is for forecasting accidents on rural road segments and rural intersections. The methodological approach is to use predictive models for specific base conditions, with traffic volume information as the sole explanatory variable for crashes, and then to apply regional or state calibration factors and accident modification factors (AMFs) to estimate the impact on accidents of geometric characteristics that differ from the base model conditions. In the majority of past approaches, AMFs are derived from parameter estimates associated with the explanatory variables. A recent study for FHWA used a multistate database to examine in detail the use of the algorithm with the base model-AMF approach and explored alternative base model forms, as well as the use of full models that included non-traffic-related variables and other approaches to estimate AMFs. That research effort is reported here. The results support the IHSDM methodology.
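The algorithm structure described here (a base model driven by traffic volume, scaled by a calibration factor and AMFs) can be sketched as follows; the functional form and coefficient values are invented for illustration and are not IHSDM's actual equations:

```python
import math

def predicted_crashes(aadt, calibration, amfs, a=-7.0, b=0.9):
    """Illustrative base-model-times-AMF estimate for a rural segment:
    the base model uses traffic volume (AADT) alone; a regional calibration
    factor and accident modification factors then scale the base prediction.
    Coefficients a and b are hypothetical, not IHSDM values."""
    base = math.exp(a + b * math.log(aadt))  # crashes/year under base conditions
    out = base * calibration
    for amf in amfs:                          # e.g. lane width, shoulder type, ...
        out *= amf
    return out

# Hypothetical segment: AADT of 5000, state calibration 1.1, two AMFs.
n = predicted_crashes(aadt=5000, calibration=1.1, amfs=[1.15, 0.95])
print(round(n, 3))
```

The multiplicative structure is the point: each AMF adjusts the base estimate independently for one geometric feature that departs from base conditions.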

Relevance: 60.00%

Abstract:

Many studies focused on the development of crash prediction models have resulted in aggregate crash prediction models to quantify the safety effects of geometric, traffic, and environmental factors on the expected number of total, fatal, injury, and/or property damage crashes at specific locations. Crash prediction models focused on predicting different crash types, however, have rarely been developed. Crash type models are useful for at least three reasons. The first is motivated by the need to identify sites that are high risk with respect to specific crash types but that may not be revealed through crash totals. Second, countermeasures are likely to affect only a subset of all crashes—usually called target crashes—and so examination of crash types will lead to improved ability to identify effective countermeasures. Finally, there is a priori reason to believe that different crash types (e.g., rear-end, angle, etc.) are associated with road geometry, the environment, and traffic variables in different ways and as a result justify the estimation of individual predictive models. The objectives of this paper are to (1) demonstrate that different crash types are associated with predictor variables in different ways (as theorized) and (2) show that estimation of crash type models may lead to greater insights regarding crash occurrence and countermeasure effectiveness. This paper first describes the estimation results of crash prediction models for angle, head-on, rear-end, sideswipe (same direction and opposite direction), and pedestrian-involved crash types. Serving as a basis for comparison, a crash prediction model is estimated for total crashes. Based on 837 motor vehicle crashes collected at two-lane rural intersections in the state of Georgia, six prediction models are estimated, resulting in two Poisson (P) models and four negative binomial (NB) models.
The analysis reveals that factors such as the annual average daily traffic, the presence of turning lanes, and the number of driveways have a positive association with each type of crash, whereas median widths and the presence of lighting are negatively associated. For the best fitting models covariates are related to crash types in different ways, suggesting that crash types are associated with different precrash conditions and that modeling total crash frequency may not be helpful for identifying specific countermeasures.
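The two count-model families estimated in the paper share the same log-link mean but differ in their variance assumption; a minimal sketch of that distinction (the coefficient vector here is illustrative, not the paper's estimates):

```python
import math

def expected_crashes(beta, x):
    # Log-link mean shared by Poisson and NB regression:
    # E[crashes] = exp(beta . x), where x might hold log(AADT),
    # turning-lane and driveway counts, median width, lighting, etc.
    return math.exp(sum(b * xi for b, xi in zip(beta, x)))

def poisson_variance(mu):
    # Poisson: variance equals the mean.
    return mu

def nb_variance(mu, alpha):
    # Negative binomial (NB2 form): variance = mu + alpha * mu^2.
    # alpha > 0 accommodates the overdispersion common in crash counts;
    # alpha = 0 collapses back to the Poisson case.
    return mu + alpha * mu * mu
```

In practice the choice between the two rests on whether the estimated overdispersion parameter alpha differs significantly from zero, which is presumably why some of the six models here are Poisson and others NB.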

Relevância:

60.00% 60.00%

Publicador:

Resumo:

The concept of non-destructive testing (NDT) of materials and structures is of immense importance in engineering and medicine. Several NDT methods, including electromagnetic (EM)-based methods (e.g., X-ray and infrared), ultrasound, and S-waves, have been proposed for medical applications. This paper evaluates the viability of near infrared (NIR) spectroscopy, an EM method, for rapid non-destructive evaluation of articular cartilage. Specifically, we tested the hypothesis that there is a correlation between the NIR spectrum and the physical and mechanical characteristics of articular cartilage, such as thickness, stress, and stiffness. Intact, visually normal cartilage-on-bone plugs from 2-3 yr old bovine patellae were exposed to NIR light from a diffuse reflectance fibre-optic probe and tested mechanically to obtain their thickness, stress, and stiffness. Predictive models based on multivariate statistical analysis, relating articular cartilage NIR spectra to these characterising parameters, were developed. Our results show a varying degree of correlation between the different parameters and the NIR spectra of the samples, with R2 varying between 65% and 93%. We therefore conclude that NIR spectroscopy can be used to determine, non-destructively, the physical and functional characteristics of articular cartilage.
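The calibration idea (fit a regression from spectral measurements to a mechanical property, then judge it by R2) can be illustrated with a toy univariate least-squares fit; the full study uses multivariate statistical models over many wavelengths, so this single synthetic "absorbance band" is a deliberate simplification:

```python
def fit_line(xs, ys):
    # Ordinary least-squares slope and intercept for one predictor,
    # e.g. absorbance at a single NIR band vs. measured stiffness.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def r_squared(xs, ys, slope, intercept):
    # Coefficient of determination: fraction of variance in the
    # property explained by the spectral predictor.
    my = sum(ys) / len(ys)
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot
```

An R2 of 0.93 (93%) for one property and 0.65 (65%) for another, as reported above, would mean the spectra explain most of the variance in the first but leave a third of the variance in the second unexplained.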

Relevância:

60.00% 60.00%

Publicador:

Resumo:

Harmful Algal Blooms (HABs) have become an important environmental concern along the western coast of the United States. Toxic and noxious blooms adversely impact the economies of coastal communities in the region, pose risks to human health, and cause mortality events that have resulted in the deaths of thousands of fish, marine mammals, and seabirds. One goal of field-based research efforts on this topic is the development of predictive models of HABs that would enable rapid response, mitigation, and ultimately prevention of these events. In turn, these objectives are predicated on understanding the environmental conditions that stimulate these transient phenomena. An embedded sensor network (Fig. 1), under development in the San Pedro Shelf region off the Southern California coast, provides tools for acquiring chemical, physical, and biological data at high temporal and spatial resolution. These tools help document the emergence and persistence of HAB events, support the design and testing of predictive models, and provide contextual information for experimental studies designed to reveal the environmental conditions promoting HABs. The sensor platforms in this network include pier-based sensor arrays, ocean moorings, and HF radar stations, along with mobile sensor nodes in the form of surface and subsurface autonomous vehicles. Freewave™ radio modems facilitate network communication and form a minimally intrusive, wireless communication infrastructure throughout the Southern California coastal region, allowing rapid and cost-effective data transfer. An emerging focus of this project is the incorporation of a predictive ocean model that assimilates near-real-time, in situ data from deployed Autonomous Underwater Vehicles (AUVs).
The model assimilates these data to increase the skill of both nowcasts and forecasts, providing insight into bloom initiation as well as the movement of blooms and other oceanic features of interest (e.g., thermoclines, fronts, river discharge). From these predictions, deployed mobile sensors can be tasked to track a designated feature. This focus has led to a technology chain in which algorithms are being implemented for innovative trajectory design for AUVs. Such intelligent mission planning is required to maneuver a vehicle to the precise depths and locations of active blooms, or of physical/chemical features that might be sources of bloom initiation or persistence. The embedded network yields high-resolution temporal and spatial measurements of pertinent environmental parameters and the resulting biology (see Fig. 1). Supplementing this with ocean current information, remotely sensed imagery, and meteorological data, we obtain a comprehensive foundation for developing a fundamental understanding of HAB events. This understanding then directs labor-intensive and costly sampling efforts and analyses. Additionally, we provide coastal municipalities, managers, and state agencies with detailed information to aid their efforts in providing responsible environmental stewardship of their coastal waters.
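The assimilation step, blending a model forecast with an in situ AUV observation to sharpen the nowcast, can be illustrated with a scalar Kalman-style update. This is a deliberately simplified stand-in for the full ocean model's scheme, with variable names of our choosing:

```python
def assimilate(forecast, obs, forecast_var, obs_var):
    # Scalar Kalman-style analysis: weight the forecast and the
    # observation by the inverse of their error variances. A precise
    # observation (small obs_var) pulls the analysis strongly toward
    # the measured value and shrinks the analysis uncertainty.
    gain = forecast_var / (forecast_var + obs_var)
    analysis = forecast + gain * (obs - forecast)
    analysis_var = (1.0 - gain) * forecast_var
    return analysis, analysis_var
```

Repeating such updates as each AUV report arrives is what increases the skill of both nowcasts and forecasts relative to the free-running model.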