972 results for logistic model


Relevance: 60.00%

Abstract:

The study objectives were to determine risk factors for preterm labor (PTL) in Colorado Springs, CO, with emphasis on altitude and psychosocial factors, and to develop a model that identifies women at high risk for PTL. Three hundred and thirty patients with PTL were matched to 460 control patients without PTL, using insurance category as an indirect measure of social class. Data were gathered by patient interview and review of medical records. Seven risk groups were compared: (1) altitude change and travel; (2) psychosocial factors ((a) child, sexual, spouse, alcohol and drug abuse; (b) neuroses and psychoses; (c) serious accidents and injuries; (d) broken home (maternal parental separation); (e) assault (physical and sexual); and (f) stress (emotional, domestic, occupational, financial and general)); (3) demographic factors; (4) maternal physical condition; (5) prenatal care; (6) behavioral risks; and (7) medical factors. Analysis was by logistic regression. Results demonstrated altitude change before or after conception and travel during pregnancy to be non-significant, even after adjustment for potential confounding variables. Five significant psychosocial risk factors were identified: maternal sexual abuse (p = 0.006), physical assault (p = 0.025), nervous breakdown (p = 0.011), past occupational injury (p = 0.016), and occupational stress (p = 0.028). Considering all seven risk groups in the logistic regression, we chose a logistic model with 11 risk factors: two were psychosocial (maternal spouse abuse and past occupational injury), one pertained to maternal physical condition (≤130 lbs pre-pregnancy weight), one to prenatal care (≤10 prenatal care visits), two to behavioral risks (>15 cigarettes per day and ≤30 lbs weight gain), and five were medical factors (abnormal genital culture, previous preterm birth, primiparity, vaginal bleeding and vaginal discharge). We conclude that altitude change is not a risk factor for PTL and that selected psychosocial factors are significant risk factors for PTL.
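
As an illustration of the kind of association measure that underlies a case-control analysis like this one, the short sketch below computes a crude odds ratio and its 95% confidence interval from a hypothetical 2×2 table. The counts are invented for illustration; the study itself used logistic regression to adjust for confounders, which this sketch does not do.

```python
import math

# Hypothetical 2x2 table for a single binary risk factor (exposed vs. unexposed)
# among cases (PTL) and controls; counts are illustrative only.
a, b = 40, 290   # cases: exposed, unexposed
c, d = 25, 435   # controls: exposed, unexposed

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # Woolf's standard error of log(OR)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")
```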

Relevance: 60.00%

Abstract:

A case-control study has been conducted examining the relationship between preterm birth and occupational physical activity among U.S. Army enlisted gravidas from 1981 to 1984. The study includes 604 cases (37 or fewer weeks of gestation) and 6,070 controls (greater than 37 weeks of gestation) treated at U.S. Army medical treatment facilities worldwide. Occupational physical activity was measured using existing physical demand ratings of military occupational specialties. A statistically significant trend of preterm birth with increasing physical demand level was found (p = 0.0056). The relative risk point estimates for the two highest physical demand categories were statistically significant, RR = 1.69 (p = 0.02) and 1.75 (p = 0.01), respectively. Six of eleven additional variables were also statistically significant predictors of preterm birth: age (less than 20), race (non-white), marital status (single, never married), paygrade (E1-E3), length of military service (less than 2 years), and aptitude score (less than 100). Multivariate analyses using the logistic model yielded three statistically significant risk factors for preterm birth: occupational physical demand, lower paygrade, and non-white race. Controlling for race and paygrade, the two highest physical demand categories were again statistically significant, with relative risk point estimates of 1.56 and 1.70, respectively. The population attributable risk for military occupational physical demand was 26%, adjusted for paygrade and race; 17.5% of the preterm births were attributable to the two highest physical demand categories.
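
For readers unfamiliar with population attributable risk, the sketch below shows Levin's formula for the attributable fraction, using the relative risk quoted in the abstract but an exposure prevalence that is purely an assumption (the abstract does not report it), so the output is not meant to reproduce the study's 26% figure.

```python
def population_attributable_risk(prevalence_exposed: float, relative_risk: float) -> float:
    """Levin's formula: fraction of cases in the population attributable to the exposure."""
    excess = prevalence_exposed * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Illustrative inputs: the exposure prevalence is a made-up value; the relative
# risk is one of the adjusted point estimates quoted in the abstract.
p_exposed = 0.40   # hypothetical share of gravidas in a high physical-demand category
rr = 1.70
print(f"PAR = {population_attributable_risk(p_exposed, rr):.1%}")
```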

Relevance: 60.00%

Abstract:

The relationship between serum cholesterol and cancer incidence was investigated in the population of the Hypertension Detection and Follow-up Program (HDFP). The HDFP was a multi-center trial designed to test the effectiveness of a stepped program of medication in reducing mortality associated with hypertension. Over 10,000 participants, ages 30-69, were followed with clinic and home visits for a minimum of five years. Cancer incidence was ascertained from existing study documents, which included hospitalization records, autopsy reports and death certificates. During the five years of follow-up, 286 new cancer cases were documented. The distribution of sites and the total number of cases were similar to those predicted using rates from the Third National Cancer Survey. A non-fasting baseline serum cholesterol level was available for most participants. Age-, sex-, and race-specific five-year cancer incidence rates were computed for each cholesterol quartile. Rates were also computed by smoking status, education status, and percent-of-ideal-weight quartiles. In addition, these and other factors were investigated using the multiple logistic model. For all cancers combined, a significant inverse relationship existed between baseline serum cholesterol levels and cancer incidence. Previously documented associations between smoking, education and cancer were also demonstrated but did not account for the relationship between serum cholesterol and cancer. The relationship was more evident in males than females, but this was felt to reflect the different distribution of specific cancer sites in the two sexes. The inverse relationship existed for all specific sites investigated (except breast), although statistical significance was reached only for prostate carcinoma. Analyses excluding cases diagnosed during the first two years of follow-up still yielded an inverse relationship. Life table analysis indicated that competing risks during the period of follow-up did not account for the existence of an inverse relationship. It is concluded that a weak inverse relationship does exist between serum cholesterol and cancer incidence for many, but not all, cancer sites. This relationship is not due to confounding by other known cancer risk factors, competing risks, or persons entering the study with undiagnosed cancer. Not enough information is available at the present time to determine whether this relationship is causal, and further research is suggested.
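
A minimal sketch of the quartile-based tabulation step described above, using a synthetic cohort: participants are split into baseline-cholesterol quartiles and a crude (unadjusted) incidence per 1,000 is computed for each quartile. All values are simulated; the study's actual rates were age-, sex- and race-specific.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic stand-in for a cohort: baseline cholesterol (mg/dL) and a 5-year
# cancer outcome; these numbers are not from the HDFP data.
df = pd.DataFrame({
    "cholesterol": rng.normal(220, 40, size=10_000),
    "cancer": rng.binomial(1, 0.03, size=10_000),
})

# Assign baseline-cholesterol quartiles and compute the crude incidence per 1,000.
df["quartile"] = pd.qcut(df["cholesterol"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
rates = df.groupby("quartile", observed=True)["cancer"].mean() * 1000
print(rates.round(1))
```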

Relevance: 60.00%

Abstract:

My dissertation focuses mainly on Bayesian adaptive designs for phase I and phase II clinical trials. It covers three topics: (1) proposing a novel two-dimensional dose-finding algorithm for biological agents, (2) developing Bayesian adaptive screening designs to provide more efficient and ethical clinical trials, and (3) incorporating missing late-onset responses into early stopping decisions.

Treating patients with novel biological agents is becoming a leading trend in oncology. Unlike cytotoxic agents, for which toxicity and efficacy increase monotonically with dose, biological agents may exhibit non-monotonic dose-response relationships. Using a trial with two biological agents as an example, we propose a phase I/II trial design to identify the biologically optimal dose combination (BODC), defined as the dose combination of the two agents with the highest efficacy and tolerable toxicity. A change-point model is used to reflect the fact that the dose-toxicity surface of the combined agents may plateau at higher dose levels, and a flexible logistic model is proposed to accommodate a possibly non-monotonic dose-efficacy relationship. During the trial, we continuously update the posterior estimates of toxicity and efficacy and assign patients to the most appropriate dose combination. We propose a novel dose-finding algorithm to encourage sufficient exploration of untried dose combinations in the two-dimensional space. Extensive simulation studies show that the proposed design has desirable operating characteristics in identifying the BODC under various patterns of dose-toxicity and dose-efficacy relationships.

Trials of combination therapies for the treatment of cancer are playing an increasingly important role in the battle against this disease. To handle more efficiently the large number of combination therapies that must be tested, we propose a novel Bayesian phase II adaptive screening design to select simultaneously among possible treatment combinations involving multiple agents. The design formulates the selection procedure as a Bayesian hypothesis testing problem in which the superiority of each treatment combination corresponds to a single hypothesis. During the trial, the current posterior probabilities of all hypotheses are used to adaptively allocate patients to treatment combinations. Simulation studies show that the proposed design substantially outperforms the conventional multi-arm balanced (complete) factorial design: it allocates more patients to efficacious treatments while providing a significantly higher probability of identifying the best treatment at the end of the trial. The design is most appropriate for trials that combine multiple agents and screen for the efficacious combination to be investigated further.

Phase II studies are usually single-arm trials conducted to test the efficacy of experimental agents and to decide whether an agent is promising enough to be sent to a phase III trial. Interim monitoring is employed to stop a trial early for futility and avoid assigning an unacceptable number of patients to inferior treatments. We propose a Bayesian single-arm phase II design with continuous monitoring for estimating the response rate of the experimental drug. To address the issue of late-onset responses, we use a piecewise exponential model to estimate the hazard function of the time-to-response data and handle the missing responses with a multiple imputation approach. We evaluate the operating characteristics of the proposed method through extensive simulation studies and show that it reduces the total trial duration and yields desirable operating characteristics for different physician-specified lower bounds of the response rate and different true response rates.
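
The sketch below illustrates the general flavour of a Bayesian posterior-probability futility rule for a single-arm phase II trial, assuming a Beta(1,1) prior and a hypothetical null response rate of 20%. It is only a minimal sketch of that generic idea, not the dissertation's design, which additionally models time to response with a piecewise exponential hazard and imputes the missing late-onset outcomes.

```python
from scipy.stats import beta

def futility_check(responses: int, enrolled: int, p0: float = 0.20,
                   prior_a: float = 1.0, prior_b: float = 1.0,
                   threshold: float = 0.05) -> bool:
    """Return True if the trial should stop early for futility.

    With a Beta(prior_a, prior_b) prior and `responses` successes out of
    `enrolled` patients, the posterior for the response rate p is
    Beta(prior_a + responses, prior_b + enrolled - responses).  Stop when the
    posterior probability that p exceeds the null rate p0 falls below threshold.
    """
    post = beta(prior_a + responses, prior_b + enrolled - responses)
    prob_promising = post.sf(p0)          # P(p > p0 | data)
    return prob_promising < threshold

# Continuous monitoring after each cohort (hypothetical interim data).
for n, r in [(10, 1), (20, 2), (30, 3)]:
    print(n, r, "stop" if futility_check(r, n) else "continue")
```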

Relevance: 60.00%

Abstract:

Complex diseases such as cancer result from multiple genetic changes and environmental exposures. Thanks to the rapid development of genotyping and sequencing technologies, we are now able to assess the causal effects of many genetic and environmental factors more accurately. Genome-wide association studies have localized many causal genetic variants predisposing to certain diseases, but these studies explain only a small portion of the heritability of diseases. More advanced statistical models are needed to identify and characterize additional genetic and environmental factors and their interactions, which will enable us to better understand the causes of complex diseases. In the past decade, thanks to increasing computational capability and novel statistical developments, Bayesian methods have been widely applied in genetics and genomics research, demonstrating superiority over some standard approaches in certain areas. Gene-environment and gene-gene interaction studies are among the areas where Bayesian methods can fully exert their advantages. This dissertation focuses on developing new Bayesian statistical methods for data analysis with complex gene-environment and gene-gene interactions, and on extending existing methods for gene-environment interactions to related areas. It includes three parts: (1) deriving a Bayesian variable selection framework for hierarchical gene-environment and gene-gene interactions; (2) developing Bayesian Natural and Orthogonal Interaction (NOIA) models for gene-environment interactions; and (3) extending two Bayesian statistical methods developed for gene-environment interaction studies to related problems such as adaptively borrowing historical data.

We propose a Bayesian hierarchical mixture model framework that allows us to investigate genetic and environmental main effects, gene-gene interactions (epistasis) and gene-environment interactions in the same model. In many practical situations there is a natural hierarchical structure between the main effects and the interactions in a linear model. We incorporate this hierarchical structure into the Bayesian mixture model so that irrelevant interaction effects can be removed more efficiently, resulting in more robust, parsimonious and powerful models. We evaluate both the 'strong hierarchical' and the 'weak hierarchical' models, which require that both, or at least one, of the main effects of interacting factors be present for their interaction to be included in the model. Extensive simulations show that the proposed strong and weak hierarchical mixture models control the proportion of false positive discoveries and provide a powerful approach to identifying the predisposing main effects and interactions in studies with complex gene-environment and gene-gene interactions. We also compare these two models with an 'independent' model that does not impose the hierarchical constraint and observe their superior performance in most of the situations considered. The proposed models are applied to real data analyses of gene-environment interactions in lung cancer and cutaneous melanoma case-control studies. Bayesian statistical models also allow useful prior information to be incorporated into the modeling process. Moreover, the Bayesian mixture model outperforms the multivariate logistic model in terms of parameter estimation and variable selection in most cases. Our proposed models impose the hierarchical constraints, which further improve the Bayesian mixture model by reducing the proportion of false positive findings among the identified interactions and by successfully identifying the reported associations. This is practically appealing for studies that investigate causal factors among a moderate number of candidate genetic and environmental factors together with a relatively large number of interactions.

The Natural and Orthogonal Interaction (NOIA) models of genetic effects were previously developed to provide an analysis framework in which the estimated effects for a quantitative trait are statistically orthogonal regardless of whether Hardy-Weinberg equilibrium (HWE) holds within loci. Ma et al. (2012) recently developed a NOIA model for gene-environment interaction studies and showed its advantages for detecting true main effects and interactions compared with the usual functional model. In this project, we propose a novel Bayesian statistical model that combines the Bayesian hierarchical mixture model with the NOIA statistical model and the usual functional model. The proposed Bayesian NOIA model shows more power to detect non-null effects, with higher marginal posterior probabilities.

Finally, we review two Bayesian statistical models (a Bayesian empirical shrinkage-type estimator and Bayesian model averaging) that were developed for gene-environment interaction studies. Inspired by these models, we develop two novel statistical methods that handle related problems such as borrowing data from historical studies. The proposed methods are analogous to the gene-environment interaction methods in that they balance statistical efficiency and bias within a unified model. Through extensive simulation studies, we compare the operating characteristics of the proposed models with existing models, including the hierarchical meta-analysis model. The results show that the proposed approaches adaptively borrow the historical data in a data-driven way. These novel models may have a broad range of statistical applications in both genetic/genomic and clinical studies.
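
To make the strong/weak heredity (hierarchical) constraint concrete, the small sketch below lists which pairwise interactions would be admissible given a set of selected main effects. It is only an illustration of the constraint itself, not of the Bayesian mixture model that enforces it probabilistically; the factor names are hypothetical.

```python
from itertools import combinations

def admissible_interactions(candidates, selected_main_effects, rule="strong"):
    """List pairwise interactions allowed under a heredity (hierarchical) constraint.

    rule="strong": both main effects must already be in the model.
    rule="weak":   at least one of the two main effects must be in the model.
    """
    selected = set(selected_main_effects)
    allowed = []
    for a, b in combinations(candidates, 2):
        present = (a in selected) + (b in selected)
        if (rule == "strong" and present == 2) or (rule == "weak" and present >= 1):
            allowed.append((a, b))
    return allowed

# Hypothetical factors: two genetic variants and one environmental exposure.
factors = ["SNP1", "SNP2", "smoking"]
mains = ["SNP1", "smoking"]
print("strong:", admissible_interactions(factors, mains, "strong"))
print("weak:  ", admissible_interactions(factors, mains, "weak"))
```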

Relevance: 60.00%

Abstract:

Objective: The neurodevelopmental versus neurodegenerative debate is a basic issue in the study of the neuropathological basis of schizophrenia (SCH). Neurophysiological techniques have scarcely been involved in this debate, but nonlinear analysis methods may contribute to it.

Methods: Fifteen patients (age range 23-42 years) meeting DSM-IV-TR criteria for SCH and 15 sex- and age-matched control subjects (age range 23-42 years) underwent a resting-state magnetoencephalographic evaluation, and Lempel-Ziv complexity (LZC) scores were calculated.

Results: Regression analyses indicated that LZC values were strongly dependent on age. Complexity scores increased as a function of age in controls, while SCH patients exhibited a progressive reduction of LZC values. A logistic model including LZC scores, age and the interaction of both variables allowed the classification of patients and controls with high sensitivity and specificity.

Conclusions: The results demonstrate that SCH patients fail to follow the "normal" process of complexity increase as a function of age. In addition, SCH patients exhibited a significant reduction of complexity scores as a function of age, paralleling the pattern observed in neurodegenerative diseases.

Significance: Our results support the notion of a progressive defect in SCH, which does not contradict the existence of a basic neurodevelopmental alteration.

Highlights: Schizophrenic patients show higher complexity values than controls. Schizophrenic patients showed a tendency toward reduced complexity values as a function of age, while controls showed the opposite tendency. The tendency observed in schizophrenic patients parallels that observed in Alzheimer's disease patients.
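
A minimal sketch of the classification step described in the Results: a logistic model with LZC, age and their interaction, evaluated by sensitivity and specificity. The data here are synthetic stand-ins generated to mimic the reported pattern (complexity rising with age in controls, falling in patients); group sizes, slopes and noise levels are assumptions, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic stand-in for the study data (assumed numbers, not the real dataset).
n = 15
age = np.concatenate([rng.uniform(23, 42, n), rng.uniform(23, 42, n)])
group = np.concatenate([np.zeros(n), np.ones(n)])            # 0 = control, 1 = SCH
lzc = np.where(group == 0, 0.35 + 0.006 * age, 0.65 - 0.006 * age)
lzc = lzc + rng.normal(0, 0.06, 2 * n)

# Design matrix with LZC, age and their interaction, as in the abstract's model.
X = np.column_stack([lzc, age, lzc * age])
clf = LogisticRegression(max_iter=1000).fit(X, group)
pred = clf.predict(X)

sensitivity = (pred[group == 1] == 1).mean()
specificity = (pred[group == 0] == 0).mean()
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```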

Relevance: 60.00%

Abstract:

Forest fires are the main cause of tree mortality in Mediterranean Europe and the most serious threat to Spanish forest ecosystems. In the Valencia region, about one hundred surveillance vehicles are deployed daily, and their distribution relies mainly on a fire risk index computed from meteorological conditions. This thesis focuses on the design and validation of a new integrated wildland fire risk index, specially adapted to the Mediterranean region and intended to support decision-making in the daily allocation of forest-fire surveillance resources.

The index adopts the integrated-risk approach introduced over the last decade, which comprises two components: ignition danger and vulnerability. The former represents the probability that a fire starts and the potential danger of its spreading, whereas vulnerability accounts for the characteristics of the territory and the potential effects of fire on it. To compute the danger component, indicators were identified for the natural and human agents that cause fires, for historical occurrence and for the state of the fuels, the latter closely related to meteorology and to the species present. For vulnerability, indicators were used that represent the potential effects of the fire (fire behaviour, defence infrastructure) as well as the characteristics of the terrain (value, regeneration capacity, etc.). All these indicators form a hierarchical structure in which, following the European Commission's recommendations for fire risk indices, both short-term and long-term risk indicators are included. The final value of the index is obtained by progressively aggregating the components of each level of the hierarchical structure and then integrating them. Since multi-criteria decision techniques are specifically oriented to problems based on hierarchical structures, the TOPSIS method was applied to obtain the final integration of the model. Expert opinion was introduced into the model through the weighting of each component of the index; the AHP method was used to obtain each expert's weights and to integrate them into a single weight per indicator.

The index was validated with Generalized Estimating Equation models, which account for possibly correlated responses, using official records of fires that occurred during the period 1994-2003, referenced to a 10x10 km grid, with fire occurrence and burned area as the dependent variables. The validation results show a good performance of the occurrence danger sub-index, with a high degree of correlation between the sub-index and occurrence, a good fit of the logistic model and good discriminating power. The vulnerability sub-index, on the other hand, did not show a significant correlation between its values and the burned area of the fires, which does not rule out its validity, since some of its components are subjective in nature and independent of the area burned. Overall, the index performs well for distributing surveillance resources according to ignition danger. Nevertheless, new lines of research are identified and discussed that could improve the overall fit of the index. In particular, the apparent correlation observed in the province of Valencia between the forested area of each 10 km grid cell and its fire risk (the smaller the forested area, the higher the fire risk) needs to be studied in more depth. Other aspects to be investigated are the sensitivity of the weights of each component and the introduction of factors related to potential fire-fighting resources into the vulnerability sub-index.
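
A minimal numpy sketch of the TOPSIS aggregation step mentioned above, assuming a small hypothetical decision matrix (grid cells scored on a few risk indicators) and a weight vector such as one obtained with AHP. The indicator values and weights are illustrative only, not the thesis's data.

```python
import numpy as np

def topsis(matrix: np.ndarray, weights: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """Rank alternatives (rows) by relative closeness to the ideal solution.

    matrix  : alternatives x criteria scores
    weights : criterion weights (summing to 1), e.g. obtained with AHP
    benefit : True where a larger score means more risk, False otherwise
    """
    norm = matrix / np.linalg.norm(matrix, axis=0)     # vector normalisation
    v = norm * weights                                 # weighted normalised matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_ideal = np.linalg.norm(v - ideal, axis=1)
    d_anti = np.linalg.norm(v - anti, axis=1)
    return d_anti / (d_ideal + d_anti)                 # closeness coefficient in [0, 1]

# Hypothetical example: 4 grid cells scored on 3 risk indicators.
scores = np.array([[0.8, 0.3, 0.6],
                   [0.4, 0.7, 0.2],
                   [0.9, 0.8, 0.7],
                   [0.1, 0.2, 0.3]])
w = np.array([0.5, 0.3, 0.2])                          # e.g. AHP-derived weights
print(topsis(scores, w, benefit=np.array([True, True, True])).round(3))
```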

Relevance: 60.00%

Abstract:

A software package for the simulation of bruise occurrence in fruit grading lines, SIMLIN 2.0, is presented, together with examples of its application to the simulation of handling 'Sudanell' peaches. SIMLIN 2.0 provides algorithms for selecting logistic bruise prediction models fitted on the basis of user-designed laboratory tests. The fruits handled are characterised for simulation by means of statistical features of the independent variables of the logistic model. SIMLIN 2.0 can display different line designs, establishing their aggressiveness from internal databases; aggressiveness is characterised in terms of data gathered with IS-100-type electronic products. The software provides graphical outputs that support decision-making on line improvement strategies and on the selection of the product to be handled.
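
The sketch below illustrates, in very simplified form, how a fitted logistic bruise-probability model could be combined with per-transfer impact data to simulate bruise incidence along a grading line. It is not SIMLIN's actual algorithm: the model coefficients, impact magnitudes and firmness distribution are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def bruise_probability(impact_g, firmness, b0=-6.0, b1=0.025, b2=-0.05):
    """Hypothetical logistic bruise model: P(bruise) from impact (g) and fruit firmness (N)."""
    z = b0 + b1 * impact_g + b2 * firmness
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical grading line: mean peak impact (g) at each transfer point.
transfer_impacts = np.array([60.0, 120.0, 180.0, 90.0])

n_fruit = 10_000
firmness = rng.normal(30.0, 5.0, n_fruit)              # assumed firmness distribution (N)

# A fruit ends up bruised if at least one transfer produces a bruise.
p_no_bruise = np.ones(n_fruit)
for impact in transfer_impacts:
    impacts = rng.normal(impact, 0.15 * impact, n_fruit)   # variability between fruits
    p_no_bruise *= 1.0 - bruise_probability(impacts, firmness)

bruised = rng.random(n_fruit) < (1.0 - p_no_bruise)
print(f"simulated bruise incidence: {bruised.mean():.1%}")
```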

Relevance: 60.00%

Abstract:

An extension of the guarantees covering rainfall-related risks in the insurance of processing tomato crops has been accompanied by a large increase in claims in western Spain, suggesting that damages may have been underestimated in previous years. A database was built by linking agricultural insurance records, meteorological data from local weather stations, and topographic data. The risk of rainfall-related damage to processing tomato in the Extremenian Guadiana river basin (western Spain) was studied using a logistic model, with risks during crop growth and at harvest modelled separately. The risk related to rainfall was modelled as a function of meteorological, terrain and management variables, and the resulting models were used to identify the variables responsible for rainfall-related damage, with a view to assessing the potential impact of extending insurance coverage and to developing an index expressing the suitability of the cropping system for insurance. The analyses reveal that damage at different stages of crop development corresponds to different hazards. The geographic dependence of the risk influences the scale at which the model is valid, and this, together with the year-to-year dependence, calls into question the feasibility of implementing index-based insurance.

Relevance: 60.00%

Abstract:

The traditional approach to road accident analysis has been based on the use of palliative tools, such as black spot (or accident-concentration section) identification and management, or preventive tools, such as road safety audits and inspections. This thesis presents an approach complementary to these tools, from a new perspective: the consideration of road sections where no accidents have occurred, the so-called White Road Sections. The aim of the thesis is to show that certain road design parameters and traffic characteristics, for roads with otherwise similar general characteristics, influence whether or not accidents occur, in addition to risk exposure as the main factor and to other factors. White Road Sections, defined as road sections of representative length where no fatal or serious-injury accidents have occurred during a long period of time, should not be a product of the randomness of accidents; on the contrary, they may be the consequence of a specific confluence of parameters of road geometry, total traffic volume and heavy-vehicle traffic.

For this research, the toll motorway network and the conventional (single-carriageway) roads of the Spanish National Road Network were considered, a total of 17,000 kilometres, together with the fatal and serious-injury accidents recorded on these networks in the period 2006-2010 (about 10,000 accidents). The road network analysed represents 65% of the total length of the National Road Network and carries 33% of its traffic; in 2013 it accounted for 47% of the injury accidents and 60% of the fatalities on the National Road Network. During the research, a database of 250,130 records and more than 3.5 million data points was developed for the toll motorways, and one of 935,402 records and more than 14 million data points for the conventional network analysed. Both toll motorways and conventional roads were classified according to their traffic characteristics, so that roads with similar levels of risk exposure are compared. For each road type, the reference length for a section to be considered a White Road Section was defined as the 95th percentile of the lengths of sections without fatal or serious-injury accidents during 2006-2010. For toll motorways, in the road class used to define the model, this reference length was 14.5 kilometres; for conventional roads it was 7.75 kilometres. For each road type, a database was built that includes the variable "existence of a White Road Section", the traffic variables (total average daily traffic, heavy-vehicle average daily traffic and percentage of heavy vehicles), the average speed and the geometric variables (number of lanes, lane width, right and left shoulder width, carriageway and platform width, radius, superelevation, gradient and, where available, sight distance in both directions); as additional variables, the number of injury accidents, fatalities and serious injuries, severity and fatality rates, and risk exposure were included.

The work carried out to explain the presence of White Road Sections in the toll motorway network shows statistically significant differences between the mean values of the traffic and geometric design variables in White Road Sections and in the remaining sections. A binary logistic regression model that partially explains the existence of White Road Sections was also calibrated, for traffic volumes below 10,000 vehicles per day and for volumes between 10,000 and 15,000 vehicles per day. For the first group (fewer than 10,000 vehicles per day), the variables with the greatest influence on the existence of a White Road Section were average speed, lane width, left shoulder width and percentage of heavy vehicles. For the second group (between 10,000 and 15,000 vehicles per day), the most influential variables were average speed, carriageway width and percentage of heavy vehicles. For conventional roads, the analyses performed did not yield a model that classifies White Road Sections well. Even so, the mean values of traffic volume, radius, sight distance, superelevation and gradient show significant differences in White Road Sections compared with the remaining sections, and these differences vary with traffic volume. The results should be regarded as the conclusion of a preliminary analysis, since other parameters of road design, traffic, environment, the human factor and the vehicle could influence the phenomenon analysed but were not considered because the information was not available. Likewise, the circumstances of the trips made by road users, their type and motivation, are a source of information of interest for which no data are available and which would improve accident analysis in general and this research in particular. In addition, limitations of this research are acknowledged that should be explored in depth in the future, thereby identifying new lines of research of interest.
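
A trivial sketch of the reference-length definition used above: the 95th percentile of the lengths of accident-free sections for one road class. The section lengths below are made up; in the thesis they come from the 2006-2010 accident database.

```python
import numpy as np

# Hypothetical lengths (km) of accident-free road sections for one road class.
section_lengths_km = np.array([1.2, 3.5, 0.8, 7.1, 12.4, 2.2, 5.9, 15.3, 4.4, 9.8])

# Reference length for a White Road Section: the 95th percentile of the lengths
# of sections without fatal or serious-injury accidents.
reference_length = np.percentile(section_lengths_km, 95)
print(f"White Road Section reference length: {reference_length:.2f} km")
```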

Relevance: 60.00%

Abstract:

This study focuses on the relationship between CO2 production and the ultimate hatchability of the incubation. A total of 43,316 eggs of red-legged partridge (Alectoris rufa) were monitored during five incubations: three in 2012 and two in 2013. The CO2 concentration inside the incubator was monitored over a 20-day period, showing sigmoidal growth from the ambient level (428 ppm) up to 1700 ppm in the incubation with the highest hatchability. Two sigmoid growth models (logistic and Gompertz) were used to describe the CO2 production by the eggs, with the logistic model giving a slightly better fit (r² = 0.976 compared with r² = 0.9746 for the Gompertz model). A coefficient of determination of 0.997 was found between the final CO2 level (ppm) estimated with the logistic model and hatchability (%).
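
A minimal sketch of fitting the two sigmoid growth models mentioned above (logistic and Gompertz) to a CO2 time series and comparing their r². The data here are synthetic, generated from an assumed logistic curve plus noise, so the parameter values and the resulting r² are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, a, k, t0):
    """Logistic growth above baseline: a / (1 + exp(-k * (t - t0)))."""
    return a / (1.0 + np.exp(-k * (t - t0)))

def gompertz(t, a, b, c):
    """Gompertz growth above baseline: a * exp(-b * exp(-c * t))."""
    return a * np.exp(-b * np.exp(-c * t))

# Synthetic CO2 readings (ppm) over 20 days of incubation (assumed curve and noise).
rng = np.random.default_rng(3)
days = np.linspace(0, 20, 41)
co2 = 428 + logistic(days, 1270, 0.5, 12) + rng.normal(0, 25, days.size)

popt_l, _ = curve_fit(lambda t, a, k, t0: 428 + logistic(t, a, k, t0),
                      days, co2, p0=[1300, 0.4, 10])
popt_g, _ = curve_fit(lambda t, a, b, c: 428 + gompertz(t, a, b, c),
                      days, co2, p0=[1300, 30, 0.3], bounds=(0, [5000.0, 1000.0, 5.0]))

def r2(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

print("logistic r2:", round(r2(co2, 428 + logistic(days, *popt_l)), 4))
print("Gompertz r2:", round(r2(co2, 428 + gompertz(days, *popt_g)), 4))
```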

Relevance: 60.00%

Abstract:

Under the present conditions of global competitiveness, rapid technological advance and resource scarcity, innovation has become one of the most important strategic approaches an organisation can exploit. In this context, a firm's innovation capability, understood as its capacity to engage in the introduction of new processes, products or ideas, is recognised as one of the main sources of sustainable growth, effectiveness and even survival for organisations. However, only a few companies have understood in practice what is needed to innovate successfully, and most regard innovation as a major challenge. The reality is no different for Brazilian companies, and in particular for small and medium-sized enterprises (SMEs). Studies indicate that SMEs in general show an even greater deficit in innovation capability. In response to the challenge of innovating, a broad literature has emerged on various aspects of innovation; even so, there are still few conclusive results or comprehensive models in innovation research, given the complexity of a multifaceted phenomenon driven by numerous factors. In addition, there is a gap between what is known from the general innovation literature and the literature on innovation in SMEs.

Given the relevance of innovation capability and the slow progress in understanding it in the context of small and medium-sized companies, whose difficulties in innovating can still be observed, this study set out to identify the determinants of the innovation capability of SMEs in order to build a model of high innovation capability for this group of companies. The objective was addressed through a quantitative method involving binary logistic regression analysis to examine, from the perspective of SMEs, the 15 determinants of innovation capability identified in the literature review. To apply the logistic regression technique, the categorical dependent variable was transformed into a binary one, with group 0 labelled unremarkable innovation capability and group 1 defined as high innovation capability. The total sample was then divided into two sub-samples, one for analysis containing 60% of the companies and the other for validation (holdout) with the remaining 40% of cases. The overall fit of the model was assessed using the McFadden pseudo-R², the Hosmer-Lemeshow chi-square test and the hit rate (classification matrix). Once this assessment confirmed the adequacy of the overall model fit, the coefficients of the variables included in the final model were analysed for significance level, direction and magnitude. Finally, the final logistic model was validated through the hit rate of the validation sample. The logistic regression analysis showed that four variables had a positive and significant correlation with the innovation capability of SMEs and therefore differentiate companies with high innovation capability from companies with unremarkable innovation capability. Based on this finding, the final model of high innovation capability for SMEs was built from four determinants: external knowledge base (external), project management capability (internal), internal knowledge base (internal) and strategy (internal).

Relevance: 60.00%

Abstract:

Thermal degradation of PLA is a complex process comprising many simultaneous reactions. Analytical techniques such as differential scanning calorimetry (DSC) and thermogravimetry (TGA) yield useful information, but a more sensitive analytical technique is needed to identify and quantify the PLA degradation products. In this work the thermal degradation of PLA at high temperatures was studied using a pyrolyzer coupled to a gas chromatograph with mass spectrometric detection (Py-GC/MS). The pyrolysis conditions (temperature and time) were optimized to obtain an adequate chromatographic separation of the compounds formed during heating; the best resolution of the chromatographic peaks was obtained by pyrolyzing the material from room temperature to 600 °C over 0.5 s. These conditions allowed the major compounds produced during PLA thermal degradation in an inert atmosphere to be identified and quantified. The operating parameters were selected by sequential pyrolysis combined with the fitting of mathematical models. Applying this strategy demonstrated that PLA degrades at high temperatures following a non-linear behaviour. Both the logistic and Boltzmann models fit the experimental results well, although the Boltzmann model provided the better estimate of the time at which 50% of the PLA was degraded. In conclusion, the Boltzmann model can be applied as a tool for simulating PLA thermal degradation.
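
To show why the Boltzmann sigmoid is convenient for reading off the 50%-degradation time, the sketch below defines the function and evaluates a degraded-fraction curve; when the lower and upper asymptotes are 0 and 1, the midpoint parameter is directly the t50. The parameter values are hypothetical, not the fitted values from the study.

```python
import numpy as np

def boltzmann(t, a1, a2, t0, dt):
    """Boltzmann sigmoid: starts near a1, ends near a2, with its midpoint at t = t0."""
    return a2 + (a1 - a2) / (1.0 + np.exp((t - t0) / dt))

# Hypothetical parameters for the degraded mass fraction of PLA during pyrolysis:
# a1 = 0 (intact), a2 = 1 (fully degraded), times in seconds.
a1, a2, t0, dt = 0.0, 1.0, 0.35, 0.05

for t in np.linspace(0.0, 0.7, 8):
    print(f"t = {t:.2f} s -> degraded fraction = {boltzmann(t, a1, a2, t0, dt):.2f}")

# With a1 = 0 and a2 = 1, the midpoint t0 is the time at which 50% of the PLA has degraded.
print(f"t50 = {t0:.2f} s")
```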

Relevance: 60.00%

Abstract:

We describe methods for estimating the parameters of Markovian population processes in continuous time, thus increasing their utility in modelling real biological systems. A general approach, applicable to any finite-state continuous-time Markovian model, is presented, and this is specialised to a computationally more efficient method applicable to a class of models called density-dependent Markov population processes. We illustrate the versatility of both approaches by estimating the parameters of the stochastic SIS logistic model from simulated data. This model is also fitted to data from a population of Bay checkerspot butterfly (Euphydryas editha bayensis), allowing us to assess the viability of this population.
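
Since the paper works with simulated realisations of the stochastic SIS logistic model, the sketch below shows one common way to generate such data: a Gillespie (exact stochastic) simulation in which the population increases at a density-dependent "logistic" rate and decreases at a per-capita rate. The parameter values are assumptions for illustration, not the paper's estimates.

```python
import numpy as np

def simulate_sis_logistic(n0, birth_rate, death_rate, capacity, t_max, seed=0):
    """Gillespie simulation of the stochastic SIS logistic model.

    The state n increases by 1 at rate birth_rate * n * (1 - n / capacity)
    and decreases by 1 at rate death_rate * n.
    """
    rng = np.random.default_rng(seed)
    t, n = 0.0, n0
    times, states = [t], [n]
    while t < t_max and n > 0:
        up = birth_rate * n * (1.0 - n / capacity)
        down = death_rate * n
        total = up + down
        if total <= 0:
            break
        t += rng.exponential(1.0 / total)          # time to the next event
        n += 1 if rng.random() < up / total else -1
        times.append(t)
        states.append(n)
    return np.array(times), np.array(states)

# Illustrative parameter values (assumptions, not fitted estimates).
t, n = simulate_sis_logistic(n0=10, birth_rate=0.6, death_rate=0.3, capacity=200, t_max=50)
print(f"final population after {t[-1]:.1f} time units: {n[-1]}")
```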

Relevance: 60.00%

Abstract:

Support foundations constitute a type of private-law legal entity created to support research, teaching and extension projects and the institutional, scientific and technological development of Brazil. Seen as links in the relationship between companies, universities and government, support foundations emerged on the Brazilian scene with the aim of establishing a platform for economic development based on three pillars: science, technology and innovation (ST&I). In applied terms, they operate as de-bureaucratisation tools, making management between public entities more agile, particularly academic management, in line with the Triple Helix approach. Against this background, the purpose of this study is to understand how Triple Helix relationships affect the fund-raising process of Brazilian support foundations. To examine these relationships, the university-industry-government interaction models proposed by Sábato and Botana (1968), the Triple Helix approach of Etzkowitz and Leydesdorff (2000), and the national innovation systems perspective discussed by Freeman (1987, 1995), Nelson (1990, 1993) and Lundvall (1992) were used. The research population consists of the 26 state research support foundations associated with the National Council of State Research Support Foundations (CONFAP) and the 102 foundations supporting higher education institutions (IES) associated with the National Council of Support Foundations for Higher Education and Scientific and Technological Research Institutions (CONFIES), totalling 128 entities. As a research strategy, this is an applied study with a quantitative approach. Primary data were collected through an e-mail survey; seventy-five responses were obtained, corresponding to 58.59% of the research universe, and the bootstrap method was used to validate the use of this sample in the analysis of results. Data were analysed with descriptive statistics and multivariate techniques: cluster analysis, canonical correlation and binary logistic regression. The canonical roots obtained indicate that the dependence between the relationship variables (with the Triple Helix actors) and the financial resources invested in innovation projects is low, supporting the study's null hypothesis that Triple Helix relationships have not interfered, positively or negatively, in raising funds for investment in innovation projects. On the other hand, the cluster analysis indicates that the entities with the largest numbers of projects and amounts of project funding are mostly large foundations (over 100 employees) that support up to five higher education institutions, publish management reports and rely, in their capital structure, on a greater share of public funding. Finally, it is worth noting that the classification power of the logistic model obtained in this study showed high predictive capacity (80.0%), allowing the academic community to replicate it in similar settings.