892 results for O41 - One, Two, and Multisector Growth Models
Abstract:
The problem of social diffusion has animated sociological thinking on topics ranging from the spread of an idea, an innovation or a disease, to the foundations of collective behavior and political polarization. While network diffusion has been a productive metaphor, the reality of diffusion processes is often muddier. Ideas and innovations diffuse differently from diseases, but, with a few exceptions, the diffusion of ideas and innovations has been modeled under the same assumptions as the diffusion of disease. In this dissertation, I develop two new diffusion models for "socially meaningful" contagions that address two of the most significant problems with current diffusion models: (1) that contagions can only spread along observed ties, and (2) that contagions do not change as they spread between people. I augment insights from these statistical and simulation models with an analysis of an empirical case of diffusion - the use of enterprise collaboration software in a large technology company. I focus the empirical study on when people abandon innovations, a crucial and understudied aspect of the diffusion of innovations. Using timestamped posts, I analyze in fine detail when people abandon the software.
To address the first problem, I propose a latent space diffusion model. Rather than treating ties as stable conduits for information, the latent space diffusion model treats ties as random draws from an underlying social space and simulates diffusion over that space. To address the second problem, I propose a diffusion model with schemas. Rather than treating information as though it spreads unchanged, the schema diffusion model allows people to modify the information they receive to fit an underlying mental model before passing it to others. Theoretically, the social space model integrates actor ties and attributes simultaneously in a single social plane, while incorporating schemas into diffusion processes gives an explicit form to the reciprocal influence that cognition and social environment have on each other. Practically, the latent space diffusion model produces statistically consistent diffusion estimates where using the network alone does not, and the schema diffusion model shows that introducing some cognitive processing into diffusion changes the rate and ultimate distribution of the spreading information. Combining the latent space models with a schema notion for actors improves our models of social diffusion both theoretically and practically.
The empirical case study focuses on how the changing value of an innovation, introduced by the innovation's network externalities, influences when people abandon the innovation. I find that people are least likely to abandon an innovation when other people in their neighborhood currently use the software as well. The effect is particularly pronounced for a supervisor's current use and for the number of supervisory team members who currently use the software. This case study not only points to an important process in the diffusion of innovation, but also suggests a new approach -- computerized collaboration systems -- to collecting and analyzing data on organizational processes.
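The latent-space mechanism described above can be sketched in a few lines: actors receive positions in a latent social space, and transmission probability decays with latent distance, so the contagion is not confined to a fixed set of observed ties. This is a toy illustration under assumed functional forms (Gaussian positions, exponential decay), not the dissertation's actual model.

```python
import numpy as np

def simulate_latent_space_diffusion(n=200, dim=2, beta=2.0, steps=30, seed=0):
    """Toy latent-space contagion: actors live in a latent social space,
    and transmission probability decays with latent distance."""
    rng = np.random.default_rng(seed)
    pos = rng.normal(size=(n, dim))              # latent positions
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    p = np.exp(-beta * d)                        # transmission prob. by distance
    infected = np.zeros(n, dtype=bool)
    infected[0] = True                           # seed actor
    for _ in range(steps):
        # each infected actor independently tries to infect each susceptible
        exposure = 1 - np.prod(1 - p[infected][:, ~infected], axis=0)
        new = rng.random(exposure.shape) < exposure
        infected[np.flatnonzero(~infected)[new]] = True
    return infected

adopters = simulate_latent_space_diffusion()
print(adopters.sum())
```

Because transmission operates on the latent space rather than on a sampled edge list, a rewired or partially observed network leaves the diffusion dynamics unchanged.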
Abstract:
Parkinson’s disease (PD) is a progressive neurodegenerative disease characterised by motor and non-motor symptoms, resulting from the degeneration of nigrostriatal dopaminergic neurons and peripheral autonomic neurons. Given the limited success of neurotrophic factors in clinical trials, there is a need to identify new small molecule drugs and drug targets to develop novel therapeutic strategies to protect all neurons that degenerate in PD. Epigenetic dysregulation has been implicated in neurodegenerative disorders, while targeting histone acetylation is a promising therapeutic avenue for PD. We and others have demonstrated that histone deacetylase inhibitors have neurotrophic effects in experimental models of PD. Activators of histone acetyltransferases (HAT) provide an alternative approach for the selective activation of gene expression, however little is known about the potential of HAT activators as drug therapies for PD. To explore this potential, the present study investigated the neurotrophic effects of CTPB (N-(4-chloro-3-trifluoromethyl-phenyl)-2-ethoxy-6-pentadecyl-benzamide), which is a potent small molecule activator of the histone acetyltransferase p300/CBP, in the SH-SY5Y neuronal cell line. We report that CTPB promoted the survival and neurite growth of the SH-SY5Y cells, and also protected these cells from cell death induced by the neurotoxin 6-hydroxydopamine. This study is the first to investigate the phenotypic effects of the HAT activator CTPB, and to demonstrate that p300/CBP HAT activation has neurotrophic effects in a cellular model of PD.
Abstract:
Quantitative Structure-Activity Relationship (QSAR) modelling has been applied extensively in predicting the toxicity of Disinfection By-Products (DBPs) in drinking water. Among many toxicological properties, the acute and chronic toxicities of DBPs have been widely used in health risk assessment of DBPs. These toxicities are correlated with molecular properties, which in turn are usually correlated with molecular descriptors. The primary goals of this thesis are: 1) to investigate the effects of molecular descriptors (e.g., chlorine number) on molecular properties such as the energy of the lowest unoccupied molecular orbital (ELUMO) via QSAR modelling and analysis; 2) to validate the models by using internal and external cross-validation techniques; and 3) to quantify the model uncertainties through Taylor and Monte Carlo simulation. One of the most important ways to predict molecular properties such as ELUMO is QSAR analysis. In this study, the number of chlorine atoms (NCl) and number of carbon atoms (NC), as well as the energy of the highest occupied molecular orbital (EHOMO), are used as molecular descriptors. Three approaches are typically used in QSAR model development: 1) Linear or Multi-Linear Regression (MLR); 2) Partial Least Squares (PLS); and 3) Principal Component Regression (PCR). In QSAR analysis, a critical step is model validation, after QSAR models are established and before applying them to toxicity prediction. The DBPs studied include five chemical classes: chlorinated alkanes, alkenes, and aromatics. In addition, validated QSARs are developed to describe the toxicity of selected groups (i.e., chloro-alkane and aromatic compounds with a nitro- or cyano group) of DBP chemicals to three types of organisms (e.g., fish, T. pyriformis, and P. phosphoreum) based on experimental toxicity data from the literature.
The results show that: 1) QSAR models to predict molecular properties built by MLR, PLS or PCR can be used either to select valid data points or to eliminate outliers; 2) the Leave-One-Out cross-validation procedure by itself is not enough to give a reliable representation of the predictive ability of the QSAR models, but Leave-Many-Out/K-fold cross-validation and external validation can be applied together to achieve more reliable results; 3) ELUMO is shown to correlate strongly with NCl for several classes of DBPs; and 4) according to uncertainty analysis using the Taylor method, the uncertainty of the QSAR models stems mostly from NCl for all DBP classes.
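The MLR-plus-validation workflow described above can be illustrated with a minimal leave-one-out cross-validation sketch on synthetic data; the descriptor names (NCl, NC, EHOMO) follow the abstract, but the data and coefficients are invented for illustration.

```python
import numpy as np

def loo_q2(X, y):
    """Leave-one-out cross-validated Q^2 for an ordinary least-squares
    (multi-linear regression) model: refit n times, each time predicting
    the held-out observation."""
    n = len(y)
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        A = np.column_stack([np.ones(mask.sum()), X[mask]])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        pred = np.concatenate([[1.0], X[i]]) @ coef
        press += (y[i] - pred) ** 2
    return 1 - press / np.sum((y - y.mean()) ** 2)

# Hypothetical descriptors NCl, NC, EHOMO predicting ELUMO (synthetic data)
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
y = 0.8 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.1, 40)
print(round(loo_q2(X, y), 3))
```

As the abstract notes, a high LOO Q^2 alone is not sufficient evidence of predictivity; the same function can be run on K-fold splits or an external hold-out set for a more demanding check.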
Abstract:
Accurate age models are a tool of utmost importance in paleoclimatology. Constraining the rate and pace of past climate change is at the core of paleoclimate research, as such knowledge is crucial to our understanding of the climate system: it allows for the disentanglement of the various drivers of climate change. The scarcity of highly resolved sedimentary records from the middle Eocene (Lutetian and Bartonian stages; 47.8 - 37.8 Ma) has led to the existence of the "Eocene astronomical time scale gap" and hindered the establishment of a comprehensive astronomical time scale (ATS) for the entire Cenozoic. Sediments from the Newfoundland Ridge drilled during Integrated Ocean Drilling Program (IODP) Expedition 342 span the Eocene gap at an unprecedented stratigraphic resolution with carbonate-bearing sediments. Moreover, these sediments exhibit cyclic lithological changes that allow for an astronomical calibration of geologic time. In this study, we use the dominant obliquity imprint in XRF-derived calcium-iron ratio series (Ca/Fe) from three sites drilled during IODP Expedition 342 (U1408, U1409, U1410) to construct a floating astrochronology. We then anchor this chronology to numerical geological time by tuning 173-kyr cycles in the amplitude modulation pattern of obliquity to an astronomical solution. This study is one of the first to use the 173-kyr obliquity amplitude cycle for astrochronologic purposes; previous studies primarily used the 405-kyr long eccentricity cycle as a tuning target to calibrate the Paleogene geologic time scale. We demonstrate that the 173-kyr cycles in obliquity's amplitude are stable between 40 and 50 Ma, which means that the 173-kyr cycle can be used for astrochronologic calibration in the Eocene. Our tuning provides new age estimates for magnetochron reversals C18n.1n - C21r and a stratigraphic framework for key Expedition 342 sites for the Eocene.
Some disagreements emerge when we compare our tuning for the interval between C19r and C20r with previous tuning attempts from the South Atlantic. We therefore present a revision of the original astronomical interpretations for the latter records, so that the various astrochronologic age models for the middle Eocene in the North and South Atlantic are consistent.
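The amplitude-modulation approach can be sketched as follows: extract the envelope of an obliquity-band signal via the analytic signal (an FFT-based Hilbert transform) and look for the ~173-kyr beat. The series here is synthetic; the actual study works with Ca/Fe records and an astronomical solution.

```python
import numpy as np

def envelope(x):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    return np.abs(np.fft.ifft(X * h))

# Synthetic "obliquity" series: 41-kyr carrier, amplitude-modulated at 173 kyr
t = np.arange(0, 2000, 1.0)                     # time in kyr
am = 1 + 0.5 * np.sin(2 * np.pi * t / 173)      # 173-kyr modulation
x = am * np.sin(2 * np.pi * t / 41)
env = envelope(x)
# dominant period of the (demeaned) envelope should be near 173 kyr
f = np.fft.rfftfreq(len(t), d=1.0)
spec = np.abs(np.fft.rfft(env - env.mean()))
print(round(1 / f[np.argmax(spec)], 1))
```

In practice the obliquity band would first be isolated from the Ca/Fe series with a bandpass filter before the envelope is taken, and the recovered modulation pattern is then tuned to the astronomical solution.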
Abstract:
To effectively assess and mitigate the risk of permafrost disturbance, disturbance-prone areas can be predicted through the application of susceptibility models. In this study we developed regional susceptibility models for permafrost disturbances using a field disturbance inventory to test the transferability of the model to a broader region in the Canadian High Arctic. The resulting susceptibility maps were then used to explore the effect of terrain variables on the occurrence of disturbances within this region. To account for a large range of landscape characteristics, the model was calibrated using two locations: Sabine Peninsula, Melville Island, NU, and Fosheim Peninsula, Ellesmere Island, NU. Spatial patterns of disturbance were predicted with a generalized linear model (GLM) and a generalized additive model (GAM), each calibrated using disturbed and randomized undisturbed locations from both sites and GIS-derived terrain predictor variables including slope, potential incoming solar radiation, wetness index, topographic position index, elevation, and distance to water. Each model was validated for the Sabine and Fosheim Peninsulas using independent data sets, while the transferability of the model to an independent site was assessed at Cape Bounty, Melville Island, NU. The regional GLM and GAM validated well for both calibration sites (Sabine and Fosheim), with areas under the receiver operating curve (AUROC) > 0.79. Both models were applied directly to Cape Bounty without calibration and validated equally well, with AUROCs of 0.76; however, each model predicted disturbed and undisturbed samples differently. Additionally, the sensitivity of the transferred model was assessed using data sets with different sample sizes. Results indicated that models based on larger sample sizes transferred more consistently and captured the variability within the terrain attributes in the respective study areas.
Terrain attributes associated with the initiation of disturbances were similar regardless of location. Disturbances commonly occurred on slopes between 4 and 15°, below the Holocene marine limit, and in areas with low potential incoming solar radiation.
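The validation above rests on the area under the receiver operating curve. A minimal sketch of AUROC via the rank-sum (Mann-Whitney) identity, on synthetic susceptibility scores rather than the study's data:

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the rank-sum identity: the probability that a random
    disturbed site scores higher than a random undisturbed one."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    # count pairwise wins, ties counted as half
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))

# Synthetic scores for disturbed (True) vs undisturbed (False) sites
rng = np.random.default_rng(2)
labels = rng.random(500) < 0.3
scores = labels * 1.0 + rng.normal(0, 1, 500)  # disturbed sites score higher on average
print(round(auroc(scores, labels), 2))
```

An AUROC of 0.5 corresponds to chance discrimination and 1.0 to perfect separation, which puts the reported 0.76-0.79 transfer results in context.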
Abstract:
The study examines the short-run and long-run causality running from real economic growth to real foreign direct investment (RFDI) inflows. Other variables, such as education (a combination of primary, secondary and tertiary enrolment as a proxy), real development finance, and unskilled labour, are included in the study. Time series data covering the period 1983-2013 are examined. First, I applied the Augmented Dickey-Fuller (ADF) technique to test for unit roots in the variables. Findings show that all variables are integrated of order one [I(1)]. Thereafter, the Johansen Co-integration Test (JCT) was conducted to establish the relationship among the variables. Both the trace and maximum eigenvalue statistics at the 5% level of significance indicate 3 co-integrated equations. A vector error correction model (VECM) was applied to capture short- and long-run causality running from education, economic growth, real development finance, and unskilled labour to real foreign direct investment inflows in the Republic of Rwanda. Findings show no short-run causality running from education, real development finance, real GDP and unskilled labour to real FDI inflows; long-run causality, however, does exist. This can be interpreted to mean that, in the short run, education, development finance and economic growth do not influence inflows of foreign direct investment in Rwanda, but they do in the long run. From a policy perspective, the Republic of Rwanda should focus more on the long-term goal of investing in education to improve human capital, undertake policy reforms that promote economic growth, and promote good governance to attract development finance - especially from Nordic countries (particularly Norway and Denmark).
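The first step of this workflow, the ADF unit-root test, amounts to a t-statistic on the lagged level in a regression on differences. A minimal numpy sketch on simulated series (illustrative only; a production analysis would use a dedicated econometrics package):

```python
import numpy as np

def adf_tstat(y, lags=1):
    """t-statistic on rho in  dy_t = a + rho*y_{t-1} + sum_i g_i*dy_{t-i} + e_t.
    Strongly negative values are evidence against a unit root (compare with
    Dickey-Fuller critical values, e.g. about -2.87 at 5% with a constant)."""
    dy = np.diff(y)
    rows = []
    for t in range(lags, len(dy)):
        rows.append([1.0, y[t]] + [dy[t - i] for i in range(1, lags + 1)])
    A = np.array(rows)
    b = dy[lags:]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    resid = b - A @ coef
    s2 = resid @ resid / (len(b) - A.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(A.T @ A)[1, 1])
    return coef[1] / se

rng = np.random.default_rng(3)
walk = np.cumsum(rng.normal(size=500))        # I(1): unit root, test should not reject
ar1 = np.zeros(500)
for t in range(1, 500):
    ar1[t] = 0.5 * ar1[t - 1] + rng.normal()  # stationary AR(1), test should reject
print(round(adf_tstat(walk), 2), round(adf_tstat(ar1), 2))
```

A series found to be I(1) by this test, like the study's variables, is differenced once to achieve stationarity before co-integration analysis.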
Abstract:
This paper describes an implementation of a method capable of integrating parametric, feature-based CAD models built on commercial software (CATIA) with the SU2 software framework. To exploit the adjoint-based methods for aerodynamic optimisation within SU2, a formulation to obtain geometric sensitivities directly from the commercial CAD parameterisation is introduced, enabling the calculation of gradients with respect to CAD-based design variables. To assess the accuracy and efficiency of the alternative approach, two aerodynamic optimisation problems are investigated: an inviscid 3D problem with multiple constraints, and a viscous 2D high-lift aerofoil problem without any constraints. Initial results show the new parameterisation obtaining reliable optima, with performance similar to the software's native parameterisations. In the final paper, details of computing CAD sensitivities will be provided, including accuracy as well as the linking of geometric sensitivities to aerodynamic objective functions and constraints; the impact on the robustness of the overall method will be assessed and alternative parameterisations will be included.
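A standard sanity check when linking CAD-derived geometric sensitivities to objective gradients is comparison against central finite differences. A toy sketch, with an invented analytic objective standing in for an aerodynamic functional:

```python
import numpy as np

def objective(x):
    """Toy stand-in for an aerodynamic objective J(design variables)."""
    return np.sin(x[0]) * x[1] ** 2 + x[0] * x[1]

def grad_analytic(x):
    """Exact gradient, playing the role of an adjoint/CAD sensitivity."""
    return np.array([np.cos(x[0]) * x[1] ** 2 + x[1],
                     2 * np.sin(x[0]) * x[1] + x[0]])

def grad_fd(f, x, h=1e-6):
    """Central finite differences, the usual cross-check for computed gradients."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x = np.array([0.7, 1.3])
print(np.max(np.abs(grad_fd(objective, x) - grad_analytic(x))))
```

For a real CAD chain the finite-difference side would perturb each design variable, regenerate the geometry, and re-evaluate the flow solution, which is exactly the cost the adjoint formulation avoids.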
Abstract:
Understanding how aquatic species grow is fundamental in fisheries because stock assessment often relies on growth dependent statistical models. Length-frequency-based methods become important when more applicable data for growth model estimation are either not available or very expensive. In this article, we develop a new framework for growth estimation from length-frequency data using a generalized von Bertalanffy growth model (VBGM) framework that allows for time-dependent covariates to be incorporated. A finite mixture of normal distributions is used to model the length-frequency cohorts of each month with the means constrained to follow a VBGM. The variances of the finite mixture components are constrained to be a function of mean length, reducing the number of parameters and allowing for an estimate of the variance at any length. To optimize the likelihood, we use a minorization–maximization (MM) algorithm with a Nelder–Mead sub-step. This work was motivated by the decline in catches of the blue swimmer crab (BSC) (Portunus armatus) off the east coast of Queensland, Australia. We test the method with a simulation study and then apply it to the BSC fishery data.
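The growth-model backbone of this framework is the von Bertalanffy curve, with mixture-component means constrained to follow it and variances tied to mean length. A minimal sketch with hypothetical parameters (not fitted BSC values):

```python
import numpy as np

def vbgm(age, L_inf, K, t0):
    """von Bertalanffy growth: expected length at age."""
    return L_inf * (1 - np.exp(-K * (age - t0)))

# Hypothetical parameters for illustration only
ages = np.arange(0.25, 4.25, 0.25)               # cohort ages in years
means = vbgm(ages, L_inf=180.0, K=1.2, t0=0.0)   # mixture-component means (mm)
sds = 0.05 * means + 2.0                         # spread tied to mean length
print(means.round(1)[:4])
```

In the full model each month's length-frequency sample is a finite mixture of normals whose component means are these VBGM values, so fitting the mixture by maximum likelihood (via the MM algorithm) estimates the growth parameters directly from length data.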
Abstract:
Plant performance is significantly influenced by prevailing light and temperature conditions during plant growth and development. For plants exposed to natural fluctuations in abiotic environmental conditions, however, it is laborious and cumbersome to experimentally assign the contribution of individual environmental factors to plant responses. This study aimed at analyzing the interplay between light, temperature and internode growth based on model approaches. We extended the light-sensitive virtual plant model L-Cucumber by implementing a common Arrhenius function for appearance rates, growth rates, and growth durations. For two greenhouse experiments, the temperature-sensitive model approach resulted in a precise prediction of cucumber mean internode lengths and number of internodes, as well as in accurately predicted patterns of individual internode lengths along the main stem. In addition, a systems analysis revealed that environmental data averaged over the experimental period were not necessarily related to internode performance. Finally, the need for a species-specific parameterization of the temperature response function and related aspects in modeling temperature effects on plant development and growth is discussed.
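The temperature response used above is an Arrhenius function. A minimal sketch with illustrative constants (not the L-Cucumber parameterization):

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius(T_celsius, A=1.0, Ea=60000.0):
    """Arrhenius temperature response: rate = A * exp(-Ea / (R*T)).
    A and Ea here are illustrative placeholders, not fitted values."""
    T = T_celsius + 273.15
    return A * np.exp(-Ea / (R * T))

# relative response: rate at 28 C vs 18 C
print(round(arrhenius(28.0) / arrhenius(18.0), 2))
```

Applying the same function to appearance rates, growth rates, and growth durations, as the study does, makes the whole developmental schedule shift consistently with greenhouse temperature.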
Abstract:
Spinal cord injury (SCI) is a devastating condition, which results from trauma to the cord, resulting in a primary injury response which leads to a secondary injury cascade, causing damage to both glial and neuronal cells. Following trauma, the central nervous system (CNS) fails to regenerate due to a plethora of both intrinsic and extrinsic factors. Unfortunately, these events lead to loss of both motor and sensory function and lifelong disability and care for sufferers of SCI. There have been tremendous advancements made in our understanding of the mechanisms behind axonal regeneration and remyelination of the damaged cord. These have provided many promising therapeutic targets. However, very few have made it to clinical application, which could potentially be due to inadequate understanding of compound mechanism of action and reliance on poor SCI models. This thesis describes the use of an established neural cell co-culture model of SCI as a medium throughput screen for compounds with potential therapeutic properties. A number of compounds were screened which resulted in a family of compounds, modified heparins, being taken forward for more intense investigation. Modified heparins (mHeps) are made up of the core heparin disaccharide unit with variable sulphation groups on the iduronic acid and glucosamine residues; 2-O-sulphate (C2), 6-O-sulphate (C6) and N-sulphate (N). 2-O-sulphated (mHep6) and N-sulphated (mHep7) heparin isomers were shown to promote both neurite outgrowth and myelination in the SCI model. It was found that both mHeps decreased oligodendrocyte precursor cell (OPC) proliferation and increased oligodendrocyte (OL) number adjacent to the lesion. However, there is a difference in the direct effects on the OL from each of the mHeps; mHep6 increased myelin internode length and mHep7 increased the overall cell size. It was further elucidated that these isoforms interact with and mediate both Wnt and FGF signalling. 
In OPC monoculture experiments, FGF2-treated OPCs displayed increased proliferation, but this effect was removed when co-treated with the mHeps, suggesting that the mHeps interact with the ligand and inhibit FGF2 signalling. Additionally, it was shown that both mHeps could be partially mediating their effects through the Wnt pathway. mHep effects on both myelination and neurite outgrowth were removed when co-treated with a Wnt signalling inhibitor, suggesting cell signalling mediation by ligand immobilisation and signalling activation as a mechanistic action for the mHeps. However, the initial methods employed in this thesis were not sufficient to provide a more detailed study of the effects the mHeps have on neurite outgrowth. This led to the design and development of a novel microfluidic device (MFD), which provides a platform for the study of axonal injury. This novel device is a three-chamber device with two chambers converging onto a central open-access chamber. This design allows axons from two points of origin to enter a chamber which can be subjected to injury, thus providing a platform in which targeted axonal injury and the regenerative capacity of a compound can be studied. In conclusion, this thesis contributes to and advances the study of SCI in two ways: 1) the identification and investigation of a novel set of compounds with therapeutic potential, i.e. desulphated modified heparins. These compounds have multiple therapeutic properties and could not only revolutionise the understanding of the basic pathological mechanisms underlying SCI but also serve as a powerful therapeutic option. 2) The development of a novel microfluidic device to study axonal biology in greater detail, specifically targeted axonal injury and treatment, providing a more representative model of SCI than standard in vitro models. Therefore, the MFD could lead to advancements and the identification of factors and compounds relating to axonal regeneration.
Abstract:
In our research we investigate the output accuracy of discrete event simulation models and agent-based simulation models when studying human-centric complex systems. In this paper we focus on human reactive behaviour, as it can be implemented in both modelling approaches using standard methods. As a case study we have chosen the retail sector, in particular the operations of the fitting room in the womenswear department of a large UK department store. In our case study we looked at ways of determining the efficiency of implementing new management policies for the fitting room operation by modelling the reactive behaviour of staff and customers of the department. First, we carried out a validation experiment in which we compared the results from our models to the performance of the real system. This experiment also allowed us to establish differences in output accuracy between the two modelling methods. In a second step, a multi-scenario experiment was carried out to study the behaviour of the models when they are used for the purpose of operational improvement. Overall we have found that, for our case study example, both discrete event simulation and agent-based simulation have the same potential to support the investigation into the efficiency of implementing new management policies.
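The discrete-event side of such a comparison can be sketched as a minimal event-driven queue model of a fitting room; the arrival and service distributions below are invented for illustration, not the case-study data:

```python
import heapq
import random

def simulate_fitting_room(n_rooms=4, n_customers=200, seed=4):
    """Minimal discrete-event simulation: customers arrive, wait for a
    free room, try garments on, and leave. Returns mean waiting time."""
    random.seed(seed)
    events, arrival = [], 0.0
    for _ in range(n_customers):
        arrival += random.expovariate(1 / 2.0)   # mean 2 min between arrivals
        heapq.heappush(events, (arrival, 0))     # kind 0 = arrival
    free, queue, waits = n_rooms, [], []
    while events:
        t, kind = heapq.heappop(events)
        if kind == 0:
            queue.append(t)                      # join the queue at time t
        else:
            free += 1                            # kind 1 = a room frees up
        while free and queue:
            waits.append(t - queue.pop(0))       # FIFO: wait = start - arrival
            free -= 1
            heapq.heappush(events, (t + random.expovariate(1 / 6.0), 1))  # mean 6 min use
    return sum(waits) / len(waits)

print(round(simulate_fitting_room(), 1))
```

A policy experiment in this style simply re-runs the model with changed parameters, e.g. more rooms or different staffing rules, and compares the resulting waiting times.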
Abstract:
Energy Conservation Measure (ECM) project selection is made difficult by real-world constraints: limited resources to implement savings retrofits, various suppliers in the market, and project financing alternatives. Many of these energy-efficient retrofit projects should be viewed as a series of investments with annual returns for these traditionally risk-averse agencies. Given a list of available ECMs, federal, state and local agencies must determine how to implement projects at the lowest cost. The most common methods of implementation planning are suboptimal with respect to cost. Federal, state and local agencies can obtain greater returns on their energy conservation investment than with traditional methods, regardless of the implementing organization. This dissertation outlines several approaches to improve the traditional energy conservation models. Public buildings in regions with similar energy conservation goals, in the United States or internationally, can also benefit greatly from this research. Additionally, many private owners of buildings are under mandates to conserve energy; e.g., Local Law 85 of the New York City Energy Conservation Code requires any building, public or private, to meet the most current energy code upon any alteration or renovation. Thus, both public and private stakeholders can benefit from this research. The research in this dissertation advances and presents models that decision-makers can use to optimize the selection of ECM projects with respect to the total cost of implementation. A practical application of a two-level mathematical program with equilibrium constraints (MPEC) improves the current best practice for agencies seeking the most cost-effective selection when leveraging energy services companies or utilities. The two-level model maximizes savings to the agency and profit to the energy services companies (Chapter 2).
An additional model leverages a single congressional appropriation to implement ECM projects (Chapter 3). Returns from implemented ECM projects are used to fund additional ECM projects. In these cases, fluctuations in energy costs and uncertainty in the estimated savings severely influence ECM project selection and the amount of the appropriation requested. A proposed risk-aversion method imposes a minimum on the number of projects completed in each stage. A comparative method using Conditional Value-at-Risk is analyzed, and time consistency is addressed. This work demonstrates how a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to simulate interactive effects between ECM projects. The approach makes use of inequalities (McCormick, 1976) to re-express constraints that involve the product of binary variables with an exact linearization (related to the convex hull of those constraints). This model additionally shows the benefits of learning between stages while remaining consistent with the single-congressional-appropriation framework.
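The McCormick linearization mentioned above is exact for products of binary variables: z = x*y is replaced by the linear constraints z <= x, z <= y, z >= x + y - 1, z >= 0. A small enumeration check:

```python
from itertools import product

def mccormick_binary(x, y):
    """Binary z values satisfying the McCormick constraints for z = x*y:
    z <= x, z <= y, z >= x + y - 1, z >= 0 (McCormick, 1976)."""
    return [z for z in (0, 1)
            if z <= x and z <= y and z >= x + y - 1]

# the constraints pin z to exactly x*y at every binary point
for x, y in product((0, 1), repeat=2):
    assert mccormick_binary(x, y) == [x * y]
print("exact linearization verified")
```

This is what lets a model with interactive (product) savings terms between ECM projects remain a mixed-integer *linear* program rather than a nonlinear one.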
Abstract:
Suppose two or more variables are jointly normally distributed. If there is a common relationship between these variables, it is important to quantify it by a parameter, the correlation coefficient, which measures its strength; this parameter can be used to develop a predictive equation and, ultimately, to draw testable conclusions about the parent population. This research focused on the correlation coefficient ρ for the bivariate and trivariate normal distributions when equal variances and equal covariances are assumed. In particular, we derived the Maximum Likelihood Estimators (MLEs) of the distribution parameters, assuming all of them are unknown, and we studied the properties and asymptotic distribution of the MLE ρ̂. Having shown this asymptotic normality, we were able to construct confidence intervals for the correlation coefficient ρ and test hypotheses about ρ. With a series of simulations, the performance of our new estimators was studied and compared with that of estimators that already exist in the literature. The results indicated that the MLE performs as well as or better than the others.
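Under the equal-variance constraint, the MLE of ρ in the bivariate case takes a pooled form, ρ̂ = 2·Sxy / (Sxx + Syy); a quick simulation check of that estimator (a standard form under this constraint, shown here as a sketch, not the thesis's full derivation):

```python
import numpy as np

def rho_hat_equal_var(x, y):
    """Estimator of rho for a bivariate normal with equal variances:
    2*Sxy / (Sxx + Syy), pooling the two sample variances."""
    xc, yc = x - x.mean(), y - y.mean()
    return 2 * (xc @ yc) / (xc @ xc + yc @ yc)

rng = np.random.default_rng(5)
rho = 0.6
cov = [[1, rho], [rho, 1]]
x, y = rng.multivariate_normal([0, 0], cov, size=20000).T
print(round(rho_hat_equal_var(x, y), 3))
```

With a large simulated sample the estimate lands close to the true ρ = 0.6, and repeating the simulation at different sample sizes is exactly the kind of performance study the abstract describes.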
Abstract:
Motivated by the evolving economics of Information and Communication Technologies and the establishment of minimum speed standards in different regulatory contexts worldwide, in particular in Colombia, this article presents several empirical approaches to assess the real effects of establishing broadband service definitions in the fixed Internet market. Based on the data available for Colombia on fixed Internet service plans offered during the period 2006-2012, a modified logistic diffusion process and a strategic interaction model are estimated for the residential and corporate segments, in order to identify, respectively, the impacts on the uptake of the service at the municipal level and on the strategic decisions adopted by operators. Regarding the results, we find, on the one hand, that the two regulatory measures established in Colombia in 2008 and 2010 have significant and positive effects on the displacement and growth of the diffusion processes at the municipal level. On the other hand, strategic substitutability is observed in corporate operators' download-speed decisions, while an analysis of the distance between offered speeds and the minimum broadband standard shows that residential service providers tend to cluster their speed decisions around the levels established by regulation.
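The modified logistic diffusion process can be sketched as a logistic adoption curve with a post-policy term; the functional form and parameters below are illustrative, not the fitted model:

```python
import numpy as np

def logistic_diffusion(t, K, r, t_mid, shift=0.0, t_policy=None):
    """Logistic diffusion S(t) = K / (1 + exp(-r*(t - t_mid))); an optional
    level shift after a policy date mimics the kind of regulatory effect
    estimated in the paper (illustrative form only)."""
    s = K / (1 + np.exp(-r * (t - t_mid)))
    if t_policy is not None:
        s = s + shift * (t >= t_policy)
    return s

t = np.arange(2006, 2013)
base = logistic_diffusion(t, K=100.0, r=0.8, t_mid=2009.0)
with_policy = logistic_diffusion(t, K=100.0, r=0.8, t_mid=2009.0,
                                 shift=5.0, t_policy=2008)
print(base.round(1))
```

Estimating such a curve municipality by municipality, with policy terms for the 2008 and 2010 measures, is how displacement and growth effects on diffusion can be identified.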