953 results for Models and Principles


Relevance:

90.00%

Publisher:

Abstract:

Amphetamine derivatives such as methamphetamine (METH) and 3,4-methylenedioxymethamphetamine (MDMA, ecstasy) are drugs widely abused in a recreational context. This has led to concern because of the evidence that they are neurotoxic in animal models and because cognitive impairments have been described in heavy abusers. The main targets of these drugs are plasmalemmal and vesicular monoamine transporters, leading to reverse transport and increased monoamine efflux into the synapse. As far as neurotoxicity is concerned, increased production of reactive oxygen species (ROS) seems to be one of the main causes. Recent research has demonstrated that blockade of α7 nicotinic acetylcholine receptors (nAChR) inhibits METH- and MDMA-induced ROS production in striatal synaptosomes, which is dependent on calcium and on NO-synthase activation. Moreover, α7 nAChR antagonists (methyllycaconitine and memantine) attenuated the neurotoxicity induced by METH and MDMA in vivo, and memantine prevented the cognitive impairment induced by these drugs. Radioligand binding experiments demonstrated that both drugs have affinity for α7 and heteromeric nAChR, with MDMA showing lower Ki values, while fluorescence calcium experiments indicated that MDMA behaves as a partial agonist on α7 and as an antagonist on heteromeric nAChR. Sustained Ca2+ increase led to calpain and caspase-3 activation. In addition, modulatory effects of MDMA on α7 and heteromeric nAChR populations have been found.

Relevance:

90.00%

Publisher:

Abstract:

It is estimated that around 230 people die each year due to radon (222Rn) exposure in Switzerland. 222Rn occurs mainly in closed environments like buildings and originates primarily from the subjacent ground. It therefore depends strongly on geology and shows substantial regional variations. Correct identification of these regional variations would allow a substantial reduction of the population's 222Rn exposure through appropriate construction of new buildings and mitigation of existing ones. Prediction of indoor 222Rn concentrations (IRC) and identification of 222Rn-prone areas is, however, difficult, since IRC depend on a variety of variables such as building characteristics, meteorology, geology and anthropogenic factors. The present work aims at the development of predictive models and the understanding of IRC in Switzerland, taking into account a maximum of information in order to minimize the prediction uncertainty. The predictive maps will be used as a decision-support tool for 222Rn risk management. The construction of these models is based on different data-driven statistical methods, in combination with geographical information systems (GIS). In a first phase, we performed univariate analysis of IRC for different variables, namely the detector type, building category, foundation, year of construction, the average outdoor temperature during measurement, altitude and lithology. All variables showed significant associations with IRC. Buildings constructed after 1900 showed significantly lower IRC compared to earlier constructions, and we observed a further drop of IRC after 1970. We also found an association of IRC with altitude. With regard to lithology, we observed the lowest IRC in sedimentary rocks (excluding carbonates) and sediments, and the highest IRC in the Jura carbonates and igneous rock. The IRC data were systematically analyzed for potential bias due to spatially unbalanced sampling of measurements. In order to facilitate the modeling and the interpretation of the influence of geology on IRC, we developed an algorithm based on k-medoids clustering which makes it possible to define geological classes that are coherent in terms of IRC. We also performed a soil gas 222Rn concentration (SRC) measurement campaign in order to determine the predictive power of SRC with respect to IRC, and found that the use of SRC for IRC prediction is limited. The second part of the project was dedicated to predictive mapping of IRC using models which take into account the multidimensionality of the process of 222Rn entry into buildings. We used kernel regression and ensemble regression trees for this purpose and could explain up to 33% of the variance of the log-transformed IRC over all of Switzerland, a good performance compared to former attempts at IRC modeling in Switzerland. As predictor variables we considered geographical coordinates, altitude, outdoor temperature, building type, foundation, year of construction and detector type. Ensemble regression trees such as random forests make it possible to determine the role of each IRC predictor in a multidimensional setting; we found spatial information such as geology, altitude and coordinates to have a stronger influence on IRC than building-related variables such as foundation type, building type and year of construction. Based on kernel estimation, we developed an approach to determine the local probability of IRC exceeding 300 Bq/m3, and we developed a confidence index to provide an estimate of the uncertainty of the map.
All methods allow easy creation of tailor-made maps for different building characteristics. Our work is an essential step towards a 222Rn risk assessment which accounts simultaneously for different architectural situations as well as geological and geographical conditions. For the communication of the 222Rn hazard to the population, we recommend using the probability map based on kernel estimation. This communication could, for example, be implemented via a web interface where users specify the characteristics and coordinates of their home in order to obtain the probability of exceeding a given IRC, with a corresponding index of confidence. Given the health effects of 222Rn, our results have the potential to substantially improve the estimation of the effective dose from 222Rn delivered to the Swiss population.
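As a rough illustration of the ensemble-regression-tree mapping described above, the following is a minimal scikit-learn sketch; the file name, column names and encodings are hypothetical stand-ins for the Swiss IRC data set, which is not reproduced here.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical file and column names standing in for the Swiss IRC data set.
df = pd.read_csv("irc_measurements.csv")
predictors = ["x_coord", "y_coord", "altitude", "outdoor_temp",
              "building_type", "foundation", "year_built", "detector_type"]
X = pd.get_dummies(df[predictors])      # one-hot encode the categorical predictors
y = np.log(df["irc_bq_m3"])             # log-transformed indoor radon concentration

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_train, y_train)

print("explained variance (R^2):", rf.score(X_test, y_test))
# Variable importances: spatial predictors vs. building-related predictors.
for imp, name in sorted(zip(rf.feature_importances_, X.columns), reverse=True)[:5]:
    print(f"{name}: {imp:.3f}")
```

A kernel-regression analogue would replace the forest with a local estimator in order to produce the exceedance-probability map mentioned above.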

Relevance:

90.00%

Publisher:

Abstract:

Paleoclimatic reconstructions coupled with species distribution models and identification of extant spatial genetic structure have the potential to provide insights into the demographic events that shape the distribution of intra-specific genetic variation across time. Using the globeflower Trollius europaeus as a case study, we combined (1) Amplified Fragment Length Polymorphisms, (2) suites of 1,000-year stepwise hindcasted species distributions and (3) a model of diffusion through time over the last 24,000 years, to trace the spatial dynamics that most likely fit the species' current genetic structure. We show that the globeflower comprises four gene pools in Europe which, from the dry period preceding the Last Glacial Maximum, dispersed while tracking the conditions fitting its climatic niche. Among these four gene pools, two are predicted to experience drastic range retraction in the near future. Our interdisciplinary approach, applicable to virtually any taxon, is an advance in inferring how climate change impacts species' genetic structures.

Relevance:

90.00%

Publisher:

Abstract:

Aim and structure of the thesis: In the first article, I focus on the context in which the Homo Economicus was constructed, i.e., the conception of economic actors as fully rational, informed, egocentric, and profit-maximizing. I argue that the Homo Economicus theory was developed in a specific societal context with specific (partly tacit) values and norms. These norms have implicitly influenced the behavior of economic actors and have framed the interpretation of the Homo Economicus. Different factors, however, have weakened this implicit influence of the broader societal values and norms on economic actors. The result is an unbridled interpretation and application of the values and norms of the Homo Economicus in the business environment, and perhaps also in the broader society. In the second article, I show that the morality of many economic actors relies on isomorphism, i.e., the attempt to fit into the group by adopting the moral norms surrounding them. In consequence, if the norms prevailing in a specific group or context (such as a specific region or a specific industry) change, it can be expected that actors with an 'isomorphism morality' will also adapt their ethical thinking and their behavior - for the 'better' or for the 'worse'. The article further describes the process through which corporations could emancipate themselves from the ethical norms prevailing in the broader society, and therefore develop an institution with specific norms and values. These norms mainly rely on mainstream business theories praising the economic actor's self-interest and neglecting moral reasoning. Moreover, because of isomorphism morality, many economic actors have changed their perception of ethics and have abandoned the values prevailing in the broader society in order to adopt those of economic theory. Finally, isomorphism morality also implies that these economic actors will change their morality again if the institutional context changes. The third article highlights the role and responsibility of business scholars in promoting a systematic reflection and self-critique of the business system, and develops alternative models to fill the moral void of the business institution and address its inherent legitimacy crisis. Indeed, the current business institution relies on assumptions such as scientific neutrality and specialization, which seem at least partly challenged by two factors. First, self-fulfilling prophecies provide scholars with an important (even if sometimes undesired) normative influence over practical life. Second, the increasing complexity of today's (socio-political) world and the interactions between the different elements constituting our society call the strong specialization of science into question. For instance, economic theories are not unrelated to psychology or sociology, and economic actors influence socio-political structures and processes, e.g., through lobbying (Dobbs, 2006; Rondinelli, 2002), or through marketing, which changes not only the way we consume but more generally tries to instill a specific lifestyle (Cova, 2004; M. K. Hogg & Michell, 1996; McCracken, 1988; Muniz & O'Guinn, 2001). In consequence, business scholars are key actors in shaping both tomorrow's economic world and its broader context. A greater awareness of this influence might be a first step toward an increased feeling of civic responsibility and accountability for the models and theories developed or taught in business schools.

Relevance:

90.00%

Publisher:

Abstract:

With the aim of improving human health, scientists have been using an approach referred to as translational research, in which they aim to translate their laboratory discoveries into clinical applications to help prevent and cure disease. Such discoveries often arise from cellular, molecular, and physiological studies that progress to the clinical level. Most of the translational work is done using animal models that share common genes, molecular pathways, or phenotypes with humans. In this article, we discuss how translational work is carried out in various animal models and illustrate its relevance for human sleep research and sleep-related disorders.

Relevance:

90.00%

Publisher:

Abstract:

BACKGROUND: Prognostic models and nomograms were recently developed to predict survival of patients with newly diagnosed glioblastoma multiforme (GBM) (1). To improve predictions, models should be updated with the most recent patient and disease information, and nomograms predicting patient outcome at the time of disease progression are required. METHODS: Baseline information from 299 patients with recurrent GBM recruited in 8 phase I or II trials of the EORTC Brain Tumor Group was used to evaluate clinical parameters as prognosticators of patient outcome. Univariate (log-rank) and multivariate (Cox models) analyses were performed to assess the ability of patient characteristics (age, sex, performance status [WHO PS], and MRC neurological deficit scale), disease history (prior treatments, time since last treatment or initial diagnosis, and administration of steroids or antiepileptics) and disease characteristics (tumor size and number of lesions) to predict progression-free survival (PFS) and overall survival (OS). A bootstrap technique was used for internal validation of the models. Nomograms were computed to provide predictions for individual patients. RESULTS: Poor PS and more than one lesion had a significant prognostic impact on both PFS and OS. Antiepileptic drug use was significantly associated with worse PFS. Larger tumors (split by the median of the largest tumor diameter, >42.5 mm) and steroid use were associated with shorter OS. Age, sex, neurologic deficit, prior therapies, and time since last therapy or initial diagnosis did not show independent prognostic value for PFS or OS. CONCLUSIONS: This analysis confirms that PS, but not age, is a major prognostic factor for PFS and OS. Multiple or large tumors and the need to administer steroids significantly increase the risk of progression and death. Nomograms at recurrence could be used to obtain accurate predictions for the design of new targeted-therapy trials or for retrospective analyses. (1. T. Gorlia et al., Nomograms for predicting survival of patients with newly diagnosed glioblastoma. Lancet Oncol 9 (1): 29-38, 2008.)
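A minimal sketch of the kind of multivariate Cox analysis described above, using the lifelines library; the file and column names are hypothetical, and categorical factors are assumed to be numerically encoded, since the EORTC data themselves are not available here.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical file and columns mirroring the baseline factors listed above;
# categorical factors (e.g. sex, steroid use) are assumed encoded as 0/1.
df = pd.read_csv("recurrent_gbm_baseline.csv")
covariates = ["age", "sex", "who_ps", "n_lesions", "tumor_diameter_mm",
              "steroids", "antiepileptics", "months_since_last_treatment"]

cph = CoxPHFitter()
cph.fit(df[covariates + ["os_months", "death"]],
        duration_col="os_months", event_col="death")
cph.print_summary()   # hazard ratios and p-values for each baseline factor
```

A nomogram is then essentially a graphical rendering of the fitted model's linear predictor, assigning points to each covariate value and mapping the total to a predicted survival probability.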

Relevance:

90.00%

Publisher:

Abstract:

Although sources in general nonlinear mixtures are not separable using only statistical independence, a special and realistic case of nonlinear mixtures, the post-nonlinear (PNL) mixture, is separable by choosing a suitable separating system. A natural approach is then based on the estimation of the separating system parameters by minimizing an independence criterion, such as the estimated mutual information. This class of methods requires higher (than second) order statistics, and cannot separate Gaussian sources. However, the use of a (weak) prior, such as source temporal correlation or nonstationarity, leads to other source separation algorithms, which are able to separate Gaussian sources, and a few of them can even work with second-order statistics. Recently, modeling time-correlated sources by Markov models, we proposed very efficient algorithms based on the minimization of the conditional mutual information. Currently, using the prior of temporally correlated sources, we investigate the feasibility of inverting PNL mixtures with non-bijective nonlinearities, such as quadratic functions. In this paper, we review the main ICA and BSS results for nonlinear mixtures, present PNL models and algorithms, and finish with advanced results using temporally correlated sources.
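A minimal NumPy sketch of the PNL model discussed above: temporally correlated sources pass through a linear mixture followed by componentwise invertible nonlinearities, and the separating system mirrors that structure. The "oracle" parameters used at the end are for illustration only; in practice the nonlinearities g_i and the unmixing matrix B are estimated by minimizing a (conditional) mutual-information criterion.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000

# Two temporally correlated AR(1) sources -- the weak prior mentioned above.
s = np.zeros((2, T))
for t in range(1, T):
    s[:, t] = 0.9 * s[:, t - 1] + rng.normal(size=2)

A = np.array([[1.0, 0.6],
              [0.5, 1.0]])            # unknown linear mixing matrix
z = A @ s                             # linear stage of the mixture
x = np.tanh(z / 5.0)                  # componentwise (invertible) post-nonlinearity

def separate(x, g, B):
    """PNL separating system: componentwise nonlinearities g, then linear unmixing B."""
    return B @ g(x)

# Oracle parameters, for illustration only (normally estimated, never known):
y = separate(x, lambda u: 5.0 * np.arctanh(u), np.linalg.inv(A))
print(np.allclose(y, s, atol=1e-6))   # the mirrored structure recovers the sources
```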

Relevance:

90.00%

Publisher:

Abstract:

The objective of this study was to adapt a nonlinear model (Wang and Engel - WE) for simulating the phenology of maize (Zea mays L.), and to evaluate this model and a linear one (thermal time) in predicting the developmental stages of a field-grown maize variety. A field experiment was conducted in Santa Maria, RS, Brazil, during the 2005/2006 and 2006/2007 growing seasons, with seven sowing dates in each season. Dates of emergence, silking, and physiological maturity of the maize variety BRS Missões were recorded in six replications for each sowing date. Data collected in the 2005/2006 growing season were used to estimate the coefficients of the two models, and data collected in the 2006/2007 growing season were used as an independent data set for model evaluation. The nonlinear WE model accurately predicted the dates of silking and physiological maturity, and had a lower root mean square error (RMSE) than the linear (thermal time) model. The overall RMSE for silking and physiological maturity was 2.7 and 4.8 days with the WE model, and 5.6 and 8.3 days with the thermal time model, respectively.
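For reference, a small Python sketch of the two temperature-response formulations compared above: the linear thermal-time (degree-day) model and the Wang & Engel beta function. The cardinal temperatures used here are illustrative placeholders, not the coefficients estimated in the study.

```python
import numpy as np

def thermal_time(tmean, tbase=10.0):
    """Linear (thermal time) model: daily growing degree-days above a base temperature."""
    return np.maximum(tmean - tbase, 0.0)

def wang_engel(tmean, tmin=8.0, topt=28.0, tmax=36.0):
    """Wang & Engel beta temperature-response function, returning a 0-1 daily rate.
    Cardinal temperatures here are illustrative, not the fitted values of the study."""
    alpha = np.log(2.0) / np.log((tmax - tmin) / (topt - tmin))
    t = np.clip(tmean, tmin, tmax)
    num = 2 * (t - tmin) ** alpha * (topt - tmin) ** alpha - (t - tmin) ** (2 * alpha)
    return num / (topt - tmin) ** (2 * alpha)

daily_tmean = np.array([14.0, 22.0, 28.0, 33.0])
print(thermal_time(daily_tmean))   # degree-days accumulate linearly with temperature
print(wang_engel(daily_tmean))     # nonlinear rate: maximal at topt, zero at tmin and tmax
```

Development is predicted by accumulating the daily values until a stage-specific threshold is reached, which is where the two formulations diverge under supra-optimal temperatures.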

Relevance:

90.00%

Publisher:

Abstract:

OBJECTIVES: To test the activity of tigecycline combined with 16 antimicrobials in vitro against 22 gram-positive and 55 gram-negative clinical isolates. METHODS: Antibiotic interactions were determined by chequerboard and time-kill methods. RESULTS: By chequerboard, of 891 organism-drug interactions tested, 97 (11%) were synergistic, 793 (89%) were indifferent and 1 (0.1%) was antagonistic. Among gram-positive pathogens, most synergisms occurred against Enterococcus spp. (7/11 isolates) with the tigecycline/rifampicin combination. No antagonism was detected. Among gram-negative organisms, synergism was observed mainly with trimethoprim/sulfamethoxazole against Serratia marcescens (5/5 isolates), Proteus spp. (2/5) and Stenotrophomonas maltophilia (2/5), with aztreonam against S. maltophilia (3/5), with cefepime and imipenem against Enterobacter cloacae (3/5), with ceftazidime against Morganella morganii (3/5), and with ceftriaxone against Klebsiella pneumoniae (3/5). The only case of antagonism occurred against one S. marcescens with the tigecycline/imipenem combination. Selected time-kill assays confirmed the bacteriostatic interactions observed by the chequerboard method. Moreover, they revealed a bactericidal synergism of tigecycline with piperacillin/tazobactam against one penicillin-resistant Streptococcus pneumoniae and with amikacin against Proteus vulgaris. CONCLUSIONS: Combinations of tigecycline with other antimicrobials produce primarily an indifferent response. Specific synergisms, especially against enterococci and problematic gram-negative isolates, might be worth investigating in in vitro models and/or in animal models simulating the human environment.
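The abstract does not state the interaction criterion used; a common convention for chequerboard assays (assumed here) is the fractional inhibitory concentration (FIC) index, sketched below with hypothetical MIC values.

```python
def fic_index(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
    """Fractional inhibitory concentration index for one chequerboard combination."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def classify(fic):
    # Thresholds follow a widely used convention, not necessarily the one applied here.
    if fic <= 0.5:
        return "synergy"
    if fic > 4.0:
        return "antagonism"
    return "indifference"

# Hypothetical example: drug A MIC drops from 2 to 0.5 mg/L in combination,
# drug B MIC drops from 4 to 1 mg/L.
print(classify(fic_index(0.5, 2.0, 1.0, 4.0)))   # -> "synergy" (FIC = 0.5)
```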

Relevance:

90.00%

Publisher:

Abstract:

A wide range of modelling algorithms is used by ecologists, conservation practitioners, and others to predict species ranges from point locality data. Unfortunately, the amount of data available is limited for many taxa and regions, making it essential to quantify the sensitivity of these algorithms to sample size. This is the first study to address this need by rigorously evaluating a broad suite of algorithms with independent presence-absence data from multiple species and regions. We evaluated predictions from 12 algorithms for 46 species (from six different regions of the world) at three sample sizes (100, 30, and 10 records). We used data from natural history collections to run the models, and evaluated the quality of model predictions with the area under the receiver operating characteristic curve (AUC). With decreasing sample size, model accuracy decreased and variability increased across species and between models. Novel modelling methods that incorporate both interactions between predictor variables and complex response shapes (i.e. GBM, MARS-INT, BRUTO) performed better than most methods at large sample sizes but not at the smallest sample sizes. Other algorithms were much less sensitive to sample size, including an algorithm based on maximum entropy (MAXENT) that had among the best predictive power across all sample sizes. Relative to other algorithms, a distance metric algorithm (DOMAIN) and a genetic algorithm (OM-GARP) had intermediate performance at the largest sample size and were among the best performers at the smallest sample size. No algorithm predicted consistently well with small sample sizes (n < 30); this should encourage highly conservative use of predictions based on small samples and restrict their use to exploratory modelling.
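A minimal sketch of the evaluation step described above, scoring any algorithm's habitat-suitability predictions against independent presence-absence records with AUC; the numbers are made up.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# y_true: independent presence (1) / absence (0) records held out for evaluation;
# y_score: habitat-suitability scores produced by any of the modelling algorithms.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.5])

print(roc_auc_score(y_true, y_score))  # 1.0 = perfect discrimination, 0.5 = random
```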

Relevance:

90.00%

Publisher:

Abstract:

We have constructed a forward modelling code in Matlab, capable of handling several commonly used electrical and electromagnetic methods in a 1D environment. We review the implemented electromagnetic field equations for grounded wires, frequency and transient soundings, and present new solutions for the case of a non-magnetic first layer. The CR1Dmod code evaluates the Hankel transforms occurring in the field equations using either the fast Hankel transform, based on digital filter theory, or a numerical integration scheme applied between the zeros of the Bessel function. A graphical user interface allows easy construction of 1D models and control of the parameters. Modelling results are in agreement with those of other authors, but the computation time is longer than for other available codes. Nevertheless, the CR1Dmod routine handles complex resistivities and offers solutions based on the full EM equations as well as on the quasi-static approximation. Thus, modelling of effects caused by changes in the magnetic permeability and the permittivity is also possible.
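A small Python sketch of the second Hankel-transform strategy mentioned above, numerical integration between zeros of the Bessel function; this illustrates the principle rather than the CR1Dmod implementation, and the test kernel is chosen because its transform has a closed form.

```python
import numpy as np
from scipy import integrate, special

def hankel_j0(kernel, r, n_intervals=60):
    """Approximate H(r) = integral_0^inf kernel(lam) * J0(lam*r) dlam by integrating
    between consecutive zeros of J0(lam*r) and summing the alternating pieces."""
    breaks = np.concatenate(([0.0], special.jn_zeros(0, n_intervals) / r))
    total = 0.0
    for a, b in zip(breaks[:-1], breaks[1:]):
        piece, _ = integrate.quad(lambda lam: kernel(lam) * special.j0(lam * r), a, b)
        total += piece
    return total

# Test: the transform of exp(-lam) is 1 / sqrt(1 + r^2).
r = 2.0
print(hankel_j0(lambda lam: np.exp(-lam), r), 1.0 / np.sqrt(1.0 + r ** 2))
```

Digital-filter methods reach the same result far faster by replacing the oscillatory integral with a convolution against precomputed filter coefficients, which is why they are the default in codes like CR1Dmod.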

Relevance:

90.00%

Publisher:

Abstract:

The spatial resolution visualized with hydrological models and the conceptualized images of subsurface hydrological processes often exceed the resolution of the data collected with classical instrumentation at the field scale. In recent years it has become increasingly possible to close this inherent gap to point-like field data through the application of hydrogeophysical methods at the field scale. Among all common geophysical exploration techniques, electric and electromagnetic methods arguably have the greatest sensitivity to hydrologically relevant parameters. Of particular interest in this context are induced polarization (IP) measurements, which essentially constrain the capacity of a probed subsurface region to store an electrical charge. In the absence of metallic conductors, the IP response is largely driven by current conduction along the grain surfaces. This offers the prospect of linking such measurements to the characteristics of the solid-fluid interface and thus, at least in unconsolidated sediments, should allow for first-order estimates of the permeability structure.
While the IP effect is well explored through laboratory experiments, and in part verified through field data, for clay-rich environments, the applicability of IP-based characterizations to clay-poor aquifers is not clear. For example, polarization mechanisms such as membrane polarization are not applicable in the rather wide pore systems of clay-free sands, and the direct transposition of Schwarz' theory, which relates the polarization of spheres to the relaxation mechanism of polarized cells, to complex natural sediments yields ambiguous results.
In order to improve our understanding of the structural origins of IP signals in such environments, as well as of their correlation with pertinent hydrological parameters, various laboratory measurements have been conducted. We consider saturated quartz samples with a grain-size spectrum varying from fine sand to fine gravel, that is, grain diameters between 0.09 and 5.6 mm, as well as corresponding mixtures which can be regarded as proxies for widespread alluvial deposits. The pore space characteristics are altered by changing (i) the grain-size spectra, (ii) the degree of compaction, and (iii) the level of sorting. We then examined how these changes affect the SIP response, the hydraulic conductivity, and the specific surface area of the considered samples, while keeping any electrochemical variability during the measurements as small as possible. The results do not follow simple assumptions on relationships to single parameters such as grain size; it was found that the complexity of naturally occurring media is not yet sufficiently represented when modelling IP. At the same time, a simple correlation with permeability was found to be strong and consistent. Hence, adaptations aiming to better represent the geo-structure of natural porous media were applied to the simplified model space used in Schwarz' theory of the IP effect. The resulting semi-empirical relationship was found to predict more accurately the IP effect and its relation to the parameters grain size and permeability. Combined with recent findings about the effect of pore-fluid electrochemistry, together with advanced complex resistivity tomography, these results will allow us to picture diverse aspects of the subsurface with relative certainty.
Within the framework of single measurement campaigns, hydrologists can then collect data with information about the geo-structure and geo-chemistry of the subsurface. However, additional research efforts will be necessary to further improve the understanding of the physical origins of the IP effect and to minimize the potential for false interpretations.
-
In the study of subsurface hydrological processes and characteristics, the spatial resolution of hydrological models often exceeds the resolution of the field data collected with classical hydrological methods. It has recently become increasingly possible to reduce this spatial divergence between numerical models and field data through the use of geophysical methods, notably geoelectrical ones. Among the electrical methods, induced polarization (IP) makes it possible to represent the capacity of porous rocks and soils to store an electrical charge. In the absence of metals in the subsurface, this effect is largely influenced by the surface characteristics of the materials. Consequently, IP measurements provide information on the interfaces between solids and fluids in porous materials, which we can link to the permeability that is governed by these same parameters. The induced polarization effect has been studied in various laboratory investigations as well as in the field. Because of the weak polarization capacity of sandy materials compared with clays, their characterization by the IP effect remains difficult to interpret coherently for heterogeneous environments. To improve knowledge of the importance of the structure of sandy subsoils for the IP effect and for hydrological parameters, we carried out a variety of laboratory measurements. In detail, we considered quartz sand samples with grain-size distributions between fine sand and fine gravel, i.e. diameters between 0.09 and 5.6 mm. The characteristics of the pore space were changed by modifying (i) the grain-size distribution, (ii) the degree of compaction, and (iii) the level of heterogeneity in the grain-size distribution. We then studied how these changes influence the IP effect, the permeability and the specific surface area of the samples. Electrochemical parameters were kept to a minimum during the measurements. The results do not show a simple relation to petrophysical parameters such as, for example, grain size; the complexity of natural media is not yet sufficiently represented by models of the IP processes. Nevertheless, the simple correlation between the IP effect and permeability is strong and consistent. Consequently, Schwarz' theory of the IP effect was adapted in a semi-empirical manner to better estimate the relation between IP results and the parameters grain size and permeability. Our results concerning the influence of the texture of the materials and of the electrochemistry of the pore fluids will make it possible to visualize diverse aspects of the subsurface. With such geoelectrical measurements, hydrologists can collect data containing information on the structure and the fluid chemistry of the subsurface. Nevertheless, more research on the physical origins of the IP effect is needed in order to minimize the potential risk of misinterpreting the data.
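The Schwarz model referred to above ties the relaxation time of the grain-scale polarization to grain size; in its commonly quoted form (assumed here, since the adapted semi-empirical relationship itself is not reproduced in the abstract) it reads

$$\tau = \frac{a^{2}}{2\,D_{s}},$$

where $a$ is the grain radius and $D_{s}$ the diffusion coefficient of the counter-ions in the electrical double layer. Larger grains therefore relax more slowly, which is what makes SIP relaxation times a candidate proxy for grain size and, indirectly, for permeability.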

Relevance:

90.00%

Publisher:

Abstract:

The objective of this work was to evaluate an estimation system for rice yield in Brazil, based on simple agrometeorological models and on the technological level of the production systems. This estimation system incorporates the conceptual basis proposed by Doorenbos & Kassam for potential and attainable yields, with empirical adjustments for maximum yield and for crop sensitivity to water deficit, considering five categories of rice yield. Rice yield was estimated from 2000/2001 to 2007/2008 and compared to IBGE yield data. Regression analyses between model estimates and data from IBGE surveys resulted in significant coefficients of determination, with less dispersion in the South than in the North and Northeast regions of the country. The index of model efficiency (E1') ranged from 0.01 in the lower yield classes to 0.45 in the higher ones, and the mean absolute error ranged from 58 to 250 kg ha-1, respectively.
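For context, the Doorenbos & Kassam water-deficit response on which the system is based is commonly written as in the sketch below; the function shown is the standard FAO-33 form, while the maximum yields and ky values used by the evaluated system are the empirically adjusted, category-specific ones described above (not reproduced here).

```python
def attainable_yield(yp, eta, etc, ky):
    """Doorenbos & Kassam (FAO-33) relation: relative yield loss is proportional,
    via the crop sensitivity factor ky, to the relative evapotranspiration deficit:
        1 - Ya/Yp = ky * (1 - ETa/ETc)
    """
    return yp * (1.0 - ky * (1.0 - eta / etc))

# Illustrative numbers only: potential yield 8 t/ha, 20% evapotranspiration deficit, ky = 1.1.
print(attainable_yield(yp=8.0, eta=0.8 * 5.0, etc=5.0, ky=1.1))  # about 6.2 t/ha
```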

Relevance:

90.00%

Publisher:

Abstract:

The appropriate timing of capacity investments is an important issue, especially in capital-intensive industries. Despite its importance, fairly few studies have been published on the topic. In the present study, models for the timing of capacity change in capital-intensive industry are developed. The study considers mainly the optimal timing of single capacity changes. The review of earlier research describes connections between the cost, capacity and timing literatures, and empirical examples are used to describe the starting point of the study and to test the developed models. The study includes four models, which describe the timing question from different perspectives. The first model, which minimizes unit costs, has been built for capacity expansion and replacement situations; it is shown that the optimal timing of an investment can be expressed with the capacity and cost advantage ratios. After the unit-cost minimization model, the view is extended in the direction of profit maximization. The second model states that early investments are preferable if the change in fixed costs is small compared to the change in the contribution margin. The third model is a numerical discounted cash flow model, which emphasizes the roles of start-up time, capacity utilization rate and the value of waiting as drivers of the profitable timing of a project. The last model expands the view from the project level to the company level and connects the flexibility of assets and cost structures to the timing problem. The main results of the research are the solutions of the models and the analyses or simulations done with them. The relevance and applicability of the results are verified by evaluating the logic of the models and by numerical cases.
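A toy discounted-cash-flow comparison in the spirit of the third model; the cash-flow profile, the ramp-up treatment and all numbers are invented for illustration only.

```python
def npv(cashflows, rate):
    """Net present value of a list of yearly cash flows (year 0 first)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def npv_of_investment(delay, capex, annual_margin, horizon, rate, rampup=1):
    """NPV of a capacity investment started after `delay` years; the first
    `rampup` operating years run at half utilization before full contribution margin."""
    flows = [0.0] * delay + [-capex]
    for year in range(1, horizon - delay + 1):
        utilization = 0.5 if year <= rampup else 1.0
        flows.append(utilization * annual_margin)
    return npv(flows, rate)

# Compare investing now versus waiting two years (illustrative numbers only).
for delay in (0, 2):
    value = npv_of_investment(delay, capex=100.0, annual_margin=18.0,
                              horizon=15, rate=0.10)
    print(f"delay = {delay} years -> NPV = {value:.1f}")
```

Lengthening the start-up time, lowering the utilization rate or raising the discount rate shifts the comparison, which is exactly the trade-off between early commitment and the value of waiting that the model explores.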