881 results for Interaction modeling. Model-based development. Interaction evaluation.
Abstract:
The joint modeling of longitudinal and survival data is a relatively new approach with many applications, such as HIV studies, cancer vaccine trials, and quality-of-life studies. There have been recent methodological developments in each component of the joint model, as well as in the statistical processes that link them together. Among these, second-order polynomial random effect models and linear mixed effects models are the most commonly used for the longitudinal trajectory function. In this study, we first relax the parametric constraints of polynomial random effect models by using Dirichlet process priors, and consider three longitudinal markers rather than only one in a single joint model. Second, we use a linear mixed effects model for the longitudinal process in a joint model analyzing the three markers. These methods were applied to the primary biliary cirrhosis (PBC) sequential data, collected from a clinical trial of PBC of the liver conducted between 1974 and 1984 at the Mayo Clinic. The effects of three longitudinal markers, (1) total serum bilirubin, (2) serum albumin, and (3) serum glutamic-oxaloacetic transaminase (SGOT), on patients' survival were investigated. The proportion of treatment effect was also studied using the proposed joint modeling approaches. Based on the results, we conclude that the proposed approaches yield a better fit to the data and give less biased parameter estimates for the trajectory functions than previous methods. Model fit is also improved when three longitudinal markers are considered instead of only one. The analysis of the proportion of treatment effect from these joint models leads to the same conclusion as the final model of Fleming and Harrington (1991): bilirubin and albumin together have a stronger impact in predicting patients' survival and serve better as surrogate endpoints for treatment.
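The abstract above describes the joint model only at a high level. As a rough illustration of how the two submodels are linked, the following sketch simulates a single-marker version: a linear mixed-effects trajectory whose current value drives a proportional-hazards event process. All parameter values and the simulation setup are hypothetical and are not taken from the PBC analysis.

```python
import numpy as np

# Minimal sketch of a joint model's two linked components (illustrative only):
# a linear mixed-effects trajectory m_i(t) and a proportional-hazards survival
# model whose hazard depends on the current trajectory value.
rng = np.random.default_rng(0)
n = 200  # subjects

# Longitudinal submodel: m_i(t) = (b0 + u0_i) + (b1 + u1_i) * t
b0, b1 = 2.0, -0.1                       # fixed effects (illustrative values)
u = rng.multivariate_normal([0, 0], [[0.5, 0.0], [0.0, 0.02]], size=n)

def trajectory(i, t):
    """Subject i's true longitudinal marker value at time t."""
    return (b0 + u[i, 0]) + (b1 + u[i, 1]) * t

# Survival submodel: hazard lambda_i(t) = lambda0 * exp(alpha * m_i(t)).
lambda0, alpha = 0.05, 0.8               # baseline hazard, association parameter

def simulate_event_time(i, dt=0.01, t_max=20.0):
    """Simulate an event time by discretizing the cumulative hazard."""
    t, cum_hazard = 0.0, 0.0
    threshold = rng.exponential(1.0)     # event occurs when cum. hazard hits this
    while t < t_max:
        cum_hazard += lambda0 * np.exp(alpha * trajectory(i, t)) * dt
        if cum_hazard >= threshold:
            return t, True
        t += dt
    return t_max, False                  # censored at t_max

times, events = zip(*(simulate_event_time(i) for i in range(n)))
print(f"observed events: {sum(events)}/{n}, median time: {np.median(times):.2f}")
```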
Abstract:
This study aimed to develop and validate the Cancer Family Impact Scale (CFIS), an instrument for use in studies investigating relationships among family factors and colorectal cancer (CRC) screening when family history is a risk factor. We used existing data from 1,285 participants (637 families) across the United States who were in the Johns Hopkins Colon Cancer Genetic Testing study to develop the measure. Participants were 94% white with an average age of 50.1 years, and 60% were women. None had a personal CRC history; 80% had one first-degree relative (FDR) with CRC and 20% had more than one. The study had three aims: (1) to identify the latent factors underlying the CFIS via exploratory factor analysis (EFA); (2) to confirm the findings of the EFA via confirmatory factor analysis (CFA); and (3) to assess the reliability of the scale via Cronbach's alpha. Exploratory analyses were performed on a split half of the sample, and the final model was confirmed on the other half. The EFA suggested the CFIS was an 18-item measure with 5 latent constructs: (1) NEGATIVE: negative effects of cancer on the family; (2) POSITIVE: positive effects of cancer on the family; (3) COMMUNICATE: how families communicate about cancer; (4) FLOW: how information about cancer is conveyed in families; and (5) NORM: how individuals react to family norms about cancer. CFA on the holdout sample showed the CFIS to have a reasonably good fit (chi-square = 389.977, df = 122, RMSEA = 0.058 (0.052-0.065), CFI = 0.902, TLI = 0.877, GFI = 0.939). The overall reliability of the scale was α = 0.65. The reliability of the subscales was: (1) NEGATIVE α = 0.682; (2) POSITIVE α = 0.686; (3) COMMUNICATE α = 0.723; (4) FLOW α = 0.467; and (5) NORM α = 0.732. We concluded that the CFIS is a good measure, with most fit indices over 0.90. The CFIS could be used to compare theoretically driven hypotheses about the pathways through which family factors could influence health behavior among unaffected individuals at risk due to family history, and could also aid in the development and evaluation of cancer prevention interventions that include a family component.
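Since the reliability results above are reported as Cronbach's alpha, a brief computational reminder may be useful. This is a generic sketch of the statistic with made-up item scores, not the CFIS data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-item subscale scored by 8 respondents (invented data).
scores = np.array([
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4],
    [3, 3, 2, 3, 3],
    [4, 4, 4, 5, 4],
    [1, 2, 1, 2, 1],
    [3, 4, 3, 3, 4],
    [5, 5, 4, 5, 5],
])
print(f"alpha = {cronbach_alpha(scores):.3f}")
```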
Abstract:
The events of the 1990s and early 2000s demonstrated the need for effective planning and response to natural and man-made disasters, one of which is pandemic flu. The CDC has stated that program, or plan, effectiveness is improved through the process of program evaluation, which should be accomplished not only periodically but in the course of routine administration of the program (Centers for Disease Control and Prevention, 1999). Accomplishing this task for a "rare, but significant event" is challenging (Herbold, 2008). To address this challenge, the RAND Corporation (under contract to the CDC) developed the "Facilitated Look-Backs" approach, which was tested and validated at the state level (Aledort et al., 2006). Nevertheless, no comprehensive and generally applicable pandemic influenza program evaluation tool or model is readily available for use at the local public health department (LPHD) level. This project developed such a model based on the RAND "Facilitated Look-Backs" approach (Aledort et al., 2006). Modifications to the RAND model included stakeholder additions, inclusion of all six CDC program evaluation steps, and suggestions for incorporating pandemic flu response plans in seasonal flu management implementation. Feedback on the model was then obtained from three LPHDs: one rural, one suburban, and one urban. These recommendations were incorporated into the final model. Feedback from the sites also supported the assumption that the model promotes effective and efficient evaluation of both pandemic and seasonal flu response by reducing redundant evaluations of pandemic flu plans, seasonal flu plans, and funding requirement accountability. Site feedback also demonstrated that the model is comprehensive and flexible, so it can be adapted and applied to different LPHD needs and settings, and that it stimulates evaluation of the major issues associated with pandemic flu planning. The next phase in evaluating this model should be to apply it in a program evaluation of one or more LPHDs' seasonal flu response that incorporates pandemic flu response plans.
Abstract:
Health departments, research institutions, policy-makers, and healthcare providers are often interested in knowing the health status of their clients/constituents. Without the resources, financial or administrative, to go out into the community and conduct health assessments directly, these entities frequently rely on data from population-based surveys to supply the information they need. Unfortunately, these surveys are ill-equipped for the job due to sample size and privacy concerns. Small area estimation (SAE) techniques have excellent potential in such circumstances, but have been underutilized in public health due to lack of awareness and confidence in applying their methods. The goal of this research is to make model-based SAE accessible to a broad readership using clear, example-based learning. Specifically, I applied the principles of multilevel, unit-level SAE to describe the geographic distribution of HPV vaccine coverage among females aged 11-26 in Texas. Multilevel (3-level: individual, county, public health region) random-intercept logit models of HPV vaccination (receipt of ≥ 1 dose of Gardasil®) were fit to data from the 2008 Behavioral Risk Factor Surveillance System (outcome and level 1 covariates) and a number of secondary sources (group-level covariates). Sampling weights were scaled (level 1) or constructed (levels 2 and 3), and incorporated at every level. Using the regression coefficients (and standard errors) from the final models, I simulated 10,000 values of each regression coefficient from the normal distribution and applied them to the logit model to estimate HPV vaccine coverage in each county and respective demographic subgroup. For simplicity, I only provide coverage estimates (and 95% confidence intervals) for counties. County-level coverage among females aged 11-17 varied from 6.8% to 29.0%. For females aged 18-26, coverage varied from 1.9% to 23.8%. Aggregated to the state level, these values translate to indirect state estimates of 15.5% and 11.4%, respectively, both of which fall within the confidence intervals for the direct estimates of HPV vaccine coverage in Texas (females 11-17: 17.7%, 95% CI: 13.6-21.9; females 18-26: 12.0%, 95% CI: 6.2-17.7). Small area estimation has great potential for informing policy, program development and evaluation, and the provision of health services. Harnessing the flexibility of multilevel, unit-level SAE to estimate HPV vaccine coverage among females aged 11-26 in Texas counties, I have provided (1) practical guidance on how to conceptualize and conduct model-based SAE, (2) a robust framework that can be applied to other health outcomes or geographic levels of aggregation, and (3) HPV vaccine coverage data that may inform the development of health education programs, the provision of health services, the planning of additional research studies, and the creation of local health policies.
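The simulation step described above lends itself to a short sketch. The fragment below mirrors the general procedure (10,000 normal draws per coefficient pushed through the inverse logit), but the coefficients, standard errors, and covariate vector are hypothetical placeholders rather than values from the fitted models.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 10_000

# (estimate, standard error) for intercept and two covariates -- illustrative
# placeholders, not the fitted BRFSS model coefficients.
coefs = {"intercept": (-2.0, 0.30), "age_group": (0.40, 0.15), "county_ses": (0.25, 0.10)}
x = np.array([1.0, 1.0, 0.5])  # covariate vector for one county/subgroup

# Draw each coefficient from its sampling distribution, n_sims times.
draws = np.column_stack([
    rng.normal(est, se, n_sims) for est, se in coefs.values()
])
linear_predictor = draws @ x
coverage = 1.0 / (1.0 + np.exp(-linear_predictor))  # inverse logit

point = coverage.mean()
lo, hi = np.percentile(coverage, [2.5, 97.5])
print(f"estimated coverage: {point:.1%} (95% interval: {lo:.1%}-{hi:.1%})")
```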
Abstract:
As the requirements for health care hospitalization have become more demanding, the discharge planning process has become a more important part of the health services system. A thorough understanding of hospital discharge planning can therefore contribute to our understanding of the health services system. This study involved the development of a process model of discharge planning from hospitals. Model building involved the identification of factors used by discharge planners to develop aftercare plans, and the specification of the roles of these factors in the development of the discharge plan. The factors in the model were concatenated in 16 discrete decision sequences, each of which produced an aftercare plan. The sample for this study comprised 407 inpatients admitted to the M. D. Anderson Hospital and Tumor Institute at Houston, Texas, who were discharged to any site within Texas during a 15-day period. Allogeneic bone marrow donors were excluded from the sample. The factors considered in the development of discharge plans were recorded by discharge planners and were used to develop the model. Data analysis consisted of sorting the discharge plans using the plan development factors until, for some combination and sequence of factors, all patients were discharged to a single site. The arrangement of factors that led to that aftercare plan became a decision sequence in the model. The model constructs the same discharge plans as those developed by hospital staff for every patient in the study. Tests of the validity of the model should be extended to other patients at the MDAH, to other cancer hospitals, and to other inpatient services. Revisions of the model based on these tests should be of value in the management of discharge planning services and in the design and development of comprehensive community health services.
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU," lays out the theoretical background for the project. Several core concepts are presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables.

Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature.

In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data are represented by the standard one-value-per-variable paradigm and are widely employed in a host of clinical models and tools; they are often represented by a number in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The other two classes are unique to time series data elements. The first of these is the raw data elements: multiple values per variable, constituting the measured observations that are typically available to end users when they review time series data, often represented as dots on a graph. The final class of data results from performing time series analysis, and represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood of producing a representation of the time series data elements that is able to distinguish between two or more classes of outcomes.
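The paper leaves the choice of mathematical or statistical operations to the modeler. One of the simplest operations that turns a time series into a latent candidate feature is a windowed least-squares slope, sketched below with made-up vital-sign values.

```python
import numpy as np

def trend_feature(values: np.ndarray, times: np.ndarray) -> float:
    """One possible time-series analysis result usable as a latent candidate
    feature: the least-squares slope of a variable over a fixed window."""
    slope, _intercept = np.polyfit(times, values, deg=1)
    return slope

# Hypothetical 10-minute window of heart-rate observations (invented values).
minutes = np.arange(10, dtype=float)
heart_rate = np.array([138, 137, 136, 136, 134, 133, 131, 130, 128, 127], float)
print(f"heart-rate trend: {trend_feature(heart_rate, minutes):.2f} beats/min per min")
```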
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU," provides a detailed description, start to finish, of the methods required to prepare the data and to build and validate a predictive model that uses the time series data elements determined in the first paper. One of its fundamental tenets is that manual implementations of time-series-based models are infeasible due to the relatively large number of data elements and the complexity of the preprocessing that must occur before data can be presented to the model. Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each step, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data to conform to a predefined structure specified during the design phase; and normalizing variable families rather than individual variable instances.

The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit," presents the results obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%; the area under the receiver operating characteristic curve increased from a baseline of 87% to 98% when the trend analysis was included. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy compared to the baseline multivariate model, but diminished classification accuracy compared to adding the trend analysis features alone (i.e., without the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve performance beyond that achieved by excluding the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
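As a concrete companion to the preprocessing issues named in the second manuscript, the sketch below illustrates two of them (defining a reference time and imputing/reducing a time series to a predefined grid) using pandas and made-up observations; it is not the authors' implementation.

```python
import pandas as pd

# Irregularly sampled observations of one variable (invented values/timestamps).
obs = pd.DataFrame(
    {"value": [96, 95, 93, 90]},
    index=pd.to_datetime([
        "2024-01-01 10:02", "2024-01-01 10:07",
        "2024-01-01 10:15", "2024-01-01 10:26",
    ]),
)
reference_time = pd.Timestamp("2024-01-01 10:30")  # e.g., time of arrest

# Align observations to the reference time as minutes-before-event.
obs["minutes_before"] = (reference_time - obs.index).total_seconds() / 60

# Reduce to a fixed 5-minute grid, filling gaps by linear interpolation.
regular = obs["value"].resample("5min").mean().interpolate(method="linear")
print(obs[["minutes_before", "value"]])
print(regular)
```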
Abstract:
Women With IMPACT (WWI) is a community-based preconception care educational intervention. WWI is being implemented by the Impacting Maternal and Prenatal Care Together (IMPACT) Collaborative and targets zip codes in Harris County, Texas at high risk for infant mortality, low birthweight, and preterm birth. WWI started in March 2012 and continues through August 2013; three workshop series are planned. This study was conducted with participants and facilitators from the first workshop series. The study aimed to (1) evaluate the WWI program using empowerment evaluation, (2) engage all WWI stakeholders in an empowerment evaluation so the method could be adopted as a participatory evaluation process for future IMPACT activities, and (3) develop recommendations for sustainability of the WWI intervention, based on empowerment evaluation findings and results from the pre/post program evaluation completed by WWI participants. Study participants included WWI participants and facilitators and IMPACT Collaborative Steering Committee members. WWI participants were female, 18- to 35-year-old, non-pregnant residents of zip codes at high risk of adverse birth outcomes; all other study participants were 18 years or older. A two-phased empowerment evaluation (EE) was utilized in this study. Sessions 1-4 were conducted independently of one another: three with participants at different sites and one with the facilitators. The fifth session included WWI participant and facilitator representatives and IMPACT Steering Committee members, and built upon the work of the other sessions. Observation notes were recorded during each session, and thematic content analysis was conducted on all EE tables and observation notes. Mission statements drafted by each group focused on improvement of physical and mental health through behavior change and empowerment of all participants. The top five overall program components were physical activity, nutrition, self-worth, in-class communication, and stress. Goals for program improvement were set by EE participants for each of these components. Through thematic content analysis of the tables and observation notes, social support emerged as an important theme of the program among all participant groups, and change to a healthy lifestyle emerged as an important theme for program improvement. The two-phased EE provided an opportunity for all program stakeholders to give feedback on important program components and suggestions for program improvement. EE, thematic content analysis, pre/post evaluation results, and inherent program knowledge were triangulated to make recommendations for sustaining the program once the initial funding ends.
Abstract:
Instrumental climate data are limited in length and available with only low spatial coverage before the middle of the 20th century. This is too short to reliably determine and interpret decadal and longer-scale climate variability and to understand the underlying mechanisms with sufficient accuracy. Proper knowledge of the past variability of the climate system is needed to assess the anthropogenic impact on climate and ecosystems, and is also important with regard to long-range climate forecasting. Highly resolved records of past climate variations that extend beyond pre-industrial times can significantly help to understand long-term climate changes and trends. Indirect information on past environmental and climatic conditions can be deduced from climate-sensitive proxies. Large colonies of massive tropical reef corals have been proven to sensitively monitor changes in ambient seawater. Rapid skeletal growth, typically ranging from several millimeters to centimeters per year, allows the development of proxy records at sub-seasonal resolution. The stable oxygen isotopic composition and trace elemental ratios incorporated in the aragonitic coral skeleton can reveal a detailed history of past environmental conditions, e.g., sea surface temperature (SST). In general, coral-based reconstructions from the tropical Atlantic region have lagged behind the extensive work published using coral records from the Indian and Pacific Oceans. Difficulties in the analysis of previously utilized coral archives from the Atlantic, typically corals of the genera Montastrea and Siderastrea, have so far hampered the production of long-term high-resolution proxy records. The objective of this study is the evaluation of massive fast-growing corals of the species Diploria strigosa as a new marine archive for climate reconstructions in the tropical Atlantic region. For this purpose, coral records from two study sites in the eastern Caribbean Sea (Guadeloupe, Lesser Antilles, and Archipelago Los Roques, Venezuela) were examined. At Guadeloupe, a century-long, monthly resolved multi-proxy coral record was generated. The results present the first δ18O- and Sr/Ca-SST calibration equations for the Atlantic brain coral Diploria strigosa; they are robust and consistent with previously published values for other coral species from different regions. Both proxies reflect local variability of SST on a sub-seasonal scale, which is a precondition for studying seasonally phase-locked climate variations, and also track variability on a larger spatial scale (i.e., in the Caribbean and tropical North Atlantic). Coral Sr/Ca reliably records local annual to interannual temperature variations and correlates more strongly with in situ air temperature than with grid-SST. The warming calculated from coral Sr/Ca is concurrent with the strong surface temperature increase at the study site during the past decades. The proxy data show a close relationship to major climate signals from the tropical Pacific and North Atlantic (the El Niño-Southern Oscillation (ENSO) and the North Atlantic Oscillation (NAO)) that affect the seasonal cycle of SST in the North Tropical Atlantic (NTA). Coral oxygen isotopes are also influenced by seawater δ18O (δ18Osw), which is linked to the hydrological cycle, and capture large-scale climate variability in the NTA region better than Sr/Ca.
Results from a quantitative comparison between extreme events in the two most prominent modes of external forcing, ENSO and the NAO, and the respective events recorded in seasonal coral δ18O imply that SST variability at the study site is strongly linked to Pacific and North Atlantic variability, thereby supporting the assumptions of observation- and model-based studies which suggest a strong impact of ENSO and NAO forcing on the NTA region through a modulation of trade wind strength in winter. Results from different spectral analysis tools suggest that interannual climate variability recorded by the coral proxies is largely dictated by Pacific ENSO forcing, whereas at decadal and longer timescales the influence of the NAO is dominant.

The Archipelago Los Roques is situated in the southeastern Caribbean Sea, north of the Venezuelan coast. Year-to-year variations in monthly resolved coral δ18O of a near-century-long Diploria strigosa record are significantly correlated with SST and show pronounced multidecadal variations. About half of the variance in coral δ18O can be explained by variations in seawater δ18O, which can be estimated by calculating the δ18O residual, i.e., by subtracting the SST component from the measured coral δ18O. The δ18O residual and a regional precipitation index are highly correlated at low frequencies, suggesting that δ18Osw variations are primarily atmospherically driven. Warmer SSTs at Los Roques broadly coincide with higher precipitation in the southeastern Caribbean at multidecadal time scales, effectively strengthening the climate signal in the coral δ18O record. The Los Roques coral δ18O record displays a strong and statistically significant relationship to different indices of hurricane activity during the peak of the Atlantic hurricane season in boreal summer, and is a particularly good indicator of decadal to multidecadal swings in those indices. In general, the detection of long-term changes and trends in Atlantic hurricane activity is hampered by the limited length of the reliable instrumental record and the known inhomogeneities in the observational databases resulting from changes in observing practice and technology over the years. The results suggest that coral-derived proxy data from Los Roques can be used to infer changes in past hurricane activity on timescales that extend well beyond the reliable record. In addition, the coral record exhibits a clear negative trend superimposed on the decadal to multidecadal cycles, indicating a significant warming and freshening of surface waters in the genesis region of tropical cyclones during the past decades. The presented coral δ18O time series provides the first and, so far, longest continuous coral-based record of hurricane activity. It appears that the combination of both signals (SST and δ18Osw) in coral δ18O leads to an amplification of large-scale climate signals in the record, making coral δ18O an even better proxy for hurricane activity than SST alone. Atlantic hurricane activity naturally exhibits strong multidecadal variations that are associated with the Atlantic Multidecadal Oscillation (AMO), the major mode of low-frequency variability in the North Atlantic Ocean. However, the mechanisms underlying this multidecadal variability remain controversial, primarily because of the limited instrumental record.
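A minimal sketch of the δ18O residual calculation mentioned above, assuming an illustrative δ18O-SST slope (the value below is a placeholder; in practice the slope comes from the calibration equations, and all data here are invented):

```python
import numpy as np

# Remove the temperature component from measured coral delta18O to estimate
# the seawater contribution (the delta18O residual).
D18O_PER_DEGC = -0.21   # assumed delta18O-SST slope (per deg C); study-specific

sst = np.array([26.1, 27.3, 28.4, 27.9, 26.6])              # SST (deg C)
coral_d18o = np.array([-4.10, -4.38, -4.55, -4.50, -4.22])  # measured (per mil)

d18o_residual = coral_d18o - D18O_PER_DEGC * (sst - sst.mean())
print(np.round(d18o_residual, 3))  # proxy for seawater delta18O variations
```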
The Los Roques coral δ18O displays strong multidecadal variability with a period of approximately 60 years that is closely related to the AMO, making the Archipelago Los Roques a very sensitive location for studying low-frequency climate variability in the Atlantic Ocean. In summary, the coral records presented in this thesis capture different key climate variables in the north tropical Atlantic region very well, indicating that fast-growing Diploria strigosa corals represent a promising marine archive for further proxy-based reconstructions of past climate variability on a range of time scales.
Abstract:
The article focuses on the incipient state of the theoretical support for education delivered in digital environments. To contribute to the development of that support, it offers a conceptual model grounded in a sociocultural constructivist approach, as a reference for design, evaluation, and research in the particular field of digital education. The model results from a justified extension of the interactive triangle proposed by Coll (1996; 2004) for the analysis of educational activities. It comprises four groups of factors: those of the content, the student, the teacher, and the technical-environmental conditions of the program, allowing the particularities of educational activities in digital environments to be addressed systematically. Drawing on up-to-date sources, the article presents and describes the most relevant factors of each group.
Abstract:
Lipids are used for the evaluation of the different organic matter contributions in the northeastern Norwegian Sea (site M23258; 75°N, 14°E) over the last 15,000 years. Development of a mass balance model based on the downcore quantification of the C37 alkenones, the odd-carbon-numbered n-alkanes (Aodd), and the unresolved complex mixture of hydrocarbons (UCM) has allowed three main organic matter inputs to be recognized, involving marine, continental, and ancient reworked organic matter. The model shows good agreement between measured and reconstructed TOC values. Similarly, a strong parallelism is observed between predicted components such as marine TOC and carbonate content (CaCO3), which was determined independently. Representation of the model results on a time scale based on 15 AMS 14C measurements shows that the main changes in organic matter constituents coincide with the major climatic events of the last 15,000 years. Thus, the predominance of reworked organic matter is characteristic of Termination Ia (up to 70%), continental organic matter was dominant during the Bølling-Allerød (B-A) and Younger Dryas (YD) periods (about 85%), and a strong increase of marine organic matter occurred in the Holocene (between 50 and 75%). This agreement reflects the main hydrographic changes that determined the deposition of sedimentary materials during the period studied: ice-rafted detritus from the Barents continental platform, ice-melting waters from the Arctic fluvial system discharging into the Barents Sea, and dominance of North Atlantic currents, respectively. In this respect, the high-resolution downcore record resulting from the mass balance and lipid measurements allows the identification of millennial-scale events, such as the increase of reworked organic matter at the final retreat of the Barents ice sheet at the end of the deglaciation period (Termination Ib).
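A minimal sketch of how such a three-end-member mass balance can be posed numerically, with invented biomarker loadings and sample values standing in for the calibrated model:

```python
import numpy as np

# Rows: C37 alkenones, odd n-alkanes (Aodd), UCM.
# Columns: marine, continental, reworked end-members.
# All loadings and sample values are invented for illustration.
A = np.array([
    [8.0, 0.2, 0.1],   # alkenones: essentially marine
    [0.3, 6.0, 1.0],   # odd n-alkanes: mostly higher-plant (continental)
    [0.1, 0.5, 9.0],   # UCM: mostly ancient reworked material
])
b = np.array([2.1, 4.9, 3.2])  # measured concentrations in one sample

# Least-squares solution, then normalized so the fractions sum to 1.
fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
fractions = np.clip(fractions, 0, None)
fractions /= fractions.sum()
for name, f in zip(["marine", "continental", "reworked"], fractions):
    print(f"{name}: {f:.1%}")
```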
Abstract:
Although sea-ice extent in the Bellingshausen-Amundsen (BA) seas sector of the Antarctic has shown significant decline over several decades, there are not enough data to draw any conclusion on sea-ice thickness and its change for the BA sector, or for the entire Southern Ocean. This paper presents our results on snow and ice thickness distributions from the SIMBA 2007 experiment in the Bellingshausen Sea, using four different methods: ASPeCt ship observations, downward-looking camera imaging, ship-based electromagnetic induction (EM) sounding, and in situ measurements using ice drills. A snow freeboard and ice thickness model generated from the in situ measurements was then applied to contemporaneous ICESat (satellite laser altimetry) freeboard measurements to derive ice thickness at the ICESat footprint scale. Errors from the in situ measurements and from the ICESat freeboard estimations were incorporated into the model, so a thorough evaluation of the model and of the uncertainty of the ICESat ice thickness estimation is possible. Our results indicate that the ICESat-derived snow freeboard and ice thickness distributions (asymmetrical and unimodal, tailing to the right) for first-year ice (0.29 ± 0.14 m mean snow freeboard and 1.06 ± 0.40 m mean ice thickness), multi-year ice (0.48 ± 0.26 and 1.59 ± 0.75 m, respectively), and all ice together (0.42 ± 0.24 and 1.38 ± 0.70 m, respectively) for the study area are reasonable compared with the values from the in situ measurements, ASPeCt observations, and EM measurements. The EM measurements can act as an appropriate supplement to the hourly ASPeCt observations taken from the ship's bridge and provide reasonable ice and snow distributions under homogeneous ice conditions. Our proposed approaches of (1) using empirical equations relating snow freeboard to ice thickness based on in situ measurements and (2) using isostatic equations that replace snow depth with snow freeboard (or empirical equations that convert freeboard to snow depth) are efficient and important ways to derive ice thickness from ICESat altimetry at the footprint scale for Antarctic sea ice. Mapping spatial and temporal snow and ice thickness from satellite altimetry for the BA sector, and for the entire Southern Ocean, is therefore possible.
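A minimal sketch of the isostatic conversion underlying approach (2), assuming typical literature densities for seawater, sea ice, and snow rather than the values used in the paper:

```python
# From hydrostatic balance, rho_i*h_i + rho_s*h_s = rho_w*(h_i - f_i), with ice
# freeboard f_i = F - h_s, where F is the snow freeboard measured by ICESat:
#     h_i = (rho_w * F - (rho_w - rho_s) * h_s) / (rho_w - rho_i)
RHO_W, RHO_I, RHO_S = 1024.0, 915.0, 300.0  # seawater, sea ice, snow (kg/m^3)

def ice_thickness(snow_freeboard_m: float, snow_depth_m: float) -> float:
    """Ice thickness (m) from snow freeboard and snow depth via isostasy."""
    return (RHO_W * snow_freeboard_m
            - (RHO_W - RHO_S) * snow_depth_m) / (RHO_W - RHO_I)

# Example with the mean first-year-ice snow freeboard reported above (0.29 m)
# and a hypothetical snow depth of 0.15 m:
print(f"{ice_thickness(0.29, 0.15):.2f} m")
```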
Abstract:
China's huge domestic market is constantly expanding; it is low-end-demand oriented and highly dispersed. The domestic-market-based development of China's industrial clusters, however, is not only a quantitative expansion but has also been accompanied by remarkable qualitative upgrading. Specialized markets are a microcosm that clearly illustrates this paradoxical phenomenon. By analyzing three typical cases of industrial clusters with specialized markets, this paper makes the case that under modern China's market conditions, the local public sector is the crucial driving force for upgrading industrial clusters: it organizes complicated transactions, promotes quality control, and stimulates the division of labor.
Abstract:
Air pollution is a serious threat with a direct impact on human health; moreover, changes in the chemical composition of the atmosphere can alter the weather, cause acid rain, or destroy ozone, all phenomena of global importance. The World Health Organization (WHO) considers air pollution one of the most important global priorities. Salamanca, Gto., Mexico has been ranked as one of the most polluted cities in the country. Industry in the area led to major economic development and rapid population growth in the second half of the twentieth century. The impact on air quality is substantial, and significant efforts have been made to measure pollutant concentrations. The main pollution sources are locally based plants in the chemical and power generation sectors, and the pollutants of concern are sulfur dioxide (SO2) and particulate matter of about 10 micrometers or less (PM10). Predicting the concentrations of these pollutants can be a powerful tool for taking preventive measures, such as reducing emissions and alerting the affected population. In this PhD thesis we propose a model to predict concentrations of the pollutants SO2 and PM10 for each monitoring booth in the Atmospheric Monitoring Network of Salamanca (REDMAS, for its Spanish acronym). The proposed models use meteorological variables as factors influencing pollutant concentration; the information used throughout this work is real data from the REDMAS. In the proposed model, artificial neural networks (ANN) are combined with clustering algorithms. The ANN used is the multilayer perceptron with one hidden layer, with a separate structure for the prediction of each pollutant. The meteorological variables used for prediction were wind direction (WD), wind speed (WS), temperature (T), and relative humidity (RH). The clustering algorithms K-means and Fuzzy C-means are used to find relationships between the air pollutants and the weather variables under consideration; these relationships provide information to the ANN and are added as inputs to it to obtain the prediction of the pollutants. The results of the proposed model are compared with the results of a multivariate linear regression and a multilayer perceptron neural network. The prediction is evaluated with the mean absolute error, the root mean square error, the correlation coefficient, and the index of agreement. The results show the importance of the meteorological variables in predicting the concentrations of SO2 and PM10 in the city of Salamanca, Gto., Mexico, and show that the proposed model performs better than both the multivariate linear regression and the plain multilayer perceptron. The models implemented for each monitoring booth are able to make air quality predictions usable in a system for real-time forecasting and human-health impact analysis. Among the main results of this thesis: a model based on an artificial neural network combined with clustering algorithms is proposed for one-hour-ahead prediction of the concentration of each pollutant (SO2 and PM10); a different model was designed for each pollutant and for each of the three monitoring booths of the REDMAS.
A model to predict the average concentration of SO2 and PM10 over the next 24 hours, likewise based on an artificial neural network combined with clustering algorithms, is also proposed; again, a separate model was designed for each REDMAS booth and for each pollutant.
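A minimal sketch of the described architecture (K-means cluster membership fed to a one-hidden-layer perceptron alongside the meteorological inputs), using synthetic data and scikit-learn; the actual thesis models were trained per pollutant and per booth on REDMAS data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n = 500

# Synthetic stand-ins for wind direction, wind speed, temperature, humidity,
# and an SO2 concentration that depends on them (all values invented).
weather = rng.normal(size=(n, 4))
so2 = 2.0 + weather @ np.array([0.5, -1.2, 0.3, 0.8]) + rng.normal(0, 0.3, n)

# Cluster the meteorological conditions; use the cluster label as an extra input.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(weather)
features = np.column_stack([weather, clusters])

# One-hidden-layer perceptron, as in the described model structure.
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(features[:400], so2[:400])
print(f"holdout R^2: {model.score(features[400:], so2[400:]):.2f}")
```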