948 results for Constraints-led approach
Abstract:
Purpose: To assess time trends in testicular cancer (TC) mortality in Spain over the period 1985-2019 for age groups 15-74 through a Bayesian age-period-cohort (APC) analysis. Methods: A Bayesian age-drift model was fitted to describe trends. Projections for 2005-2019 were calculated by means of an autoregressive APC model. Prior precision for the model parameters was selected by evaluating an adaptive precision parameter, and 95% credible intervals (95% CRI) were obtained for each model parameter. Results: TC mortality rates in age groups 15-74 decreased by 2.41% per year (95% CRI: -3.65%; -1.13%) during 1985-2004, whereas the annual decrease was smaller when the data were restricted to age groups 15-54 (-1.18%; 95% CRI: -2.60%; -0.31%). During 2005-2019, TC mortality is expected to decrease by 2.30% per year for men younger than 35, whereas rates are expected to level off for men older than 35. Conclusions: A Bayesian approach is recommended for describing and projecting time trends for diseases with a low number of cases. Through this model it was assessed that management of TC and advances in therapy led to a decreasing trend in TC mortality during 1985-2004, whereas a leveling off of these trends can be expected during 2005-2019 among men older than 35.
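The age-drift structure described above can be illustrated with a short sketch. The snippet below fits a Poisson regression with age-group effects and a linear period drift to synthetic mortality counts; it is a frequentist stand-in for the Bayesian fit in the abstract, and the data, rate constants, and person-years value are all assumed for illustration.

```python
# Minimal sketch of an age-drift model: log(rate) = age effect + drift * year.
# A frequentist Poisson GLM approximating the Bayesian fit; data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
ages = np.arange(12)                      # 12 five-year age groups (15-74)
years = np.arange(1985, 2005)             # fitting period 1985-2004

age_idx, year = np.meshgrid(ages, years, indexing="ij")
age_idx, year = age_idx.ravel(), year.ravel()
pyears = np.full(age_idx.size, 1e6)       # person-years at risk (assumed)

true_drift = np.log(1 - 0.0241)           # -2.41% per year, as in the abstract
log_rate = -10 + 0.1 * age_idx + true_drift * (year - 1985)
deaths = rng.poisson(np.exp(log_rate) * pyears)

# Design matrix: one dummy per age group plus a linear drift term.
X = np.column_stack([(age_idx == a).astype(float) for a in ages]
                    + [year - 1985])
fit = sm.GLM(deaths, X, family=sm.families.Poisson(),
             offset=np.log(pyears)).fit()
drift = fit.params[-1]
print(f"estimated annual change: {100 * np.expm1(drift):.2f}%")
```

The drift coefficient on the log scale converts to an annual percentage change via exp(drift) - 1, which is how the -2.41% figure above can be read.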
Abstract:
An emerging therapeutic approach for Duchenne muscular dystrophy is the transplantation of autologous myogenic progenitor cells genetically modified to express dystrophin. This approach is challenged by the difficulty of maintaining these cells ex vivo without losing their myogenic potential, and of ensuring sufficient transgene expression following their transplantation and myogenic differentiation in vivo. We investigated the use of the piggyBac transposon system to achieve stable gene expression when transferred to cultured mesoangioblasts and into murine muscles. Without selection, up to 8% of the mesoangioblasts expressed the transgene from 1 to 2 genomic copies of the piggyBac vector. Integration occurred mostly in intergenic genomic DNA, and transgene expression was stable in vitro. Intramuscular transplantation of mesoangioblasts carrying the transposon into mouse tibialis anterior muscles led to sustained myofiber GFP expression in vivo. In contrast, direct electroporation of the transposon-donor plasmids into mouse tibialis anterior muscles in vivo did not lead to sustained transgene expression, despite molecular evidence of piggyBac transposition in vivo. Together, these findings provide proof of principle that the piggyBac transposon may be considered for mesoangioblast cell-based therapies of muscular dystrophies.
Abstract:
The production and use of false identity and travel documents in organized crime represent a serious and evolving threat. However, the present-day fight against this criminal problem is essentially driven by a case-by-case perspective, which suffers from linkage blindness and limited analysis capacity. To assist in overcoming these limitations, a process model was developed from a forensic perspective. It guides the systematic analysis and management of seized false documents to generate forensic intelligence that supports strategic and tactical decision-making in an intelligence-led policing approach. The model is articulated around a three-level architecture that aims to assist in detecting and following up on general trends, production methods, and links between cases or series. Analyses of a large dataset of counterfeit and forged identity and travel documents illustrate the model, its three levels, and their contributions. Examples point out how the proposed approach assists in detecting emerging trends, evaluating the black market's degree of structure, uncovering criminal networks, monitoring the quality of false documents, and identifying their weaknesses to orient the design of more secure travel and identity documents. The proposed process model is thought to have general application in forensic science and can readily be transposed to other fields of study.
Abstract:
Background: Understanding the relationship between gene expression changes, enzyme activity shifts, and the corresponding physiological adaptive response of organisms to environmental cues is crucial for explaining how cells cope with stress. For example, adaptation of yeast to heat shock involves a characteristic profile of changes in the expression levels of genes coding for enzymes of the glycolytic pathway and some of its branches. The experimental determination of changes in gene expression profiles provides a descriptive picture of the adaptive response to stress. However, it does not explain why a particular profile is selected for any given response. Results: We used mathematical models and analysis of in silico gene expression profiles (GEPs) to understand how changes in gene expression correlate with an efficient response of yeast cells to heat shock. An exhaustive set of GEPs, matched with the corresponding set of enzyme activities, was simulated and analyzed. The effectiveness of each profile in the response to heat shock was evaluated according to relevant physiological and functional criteria. The small subset of GEPs that lead to effective physiological responses after heat shock was identified as the result of the tuning of several evolutionary criteria. The experimentally observed transcriptional changes in response to heat shock belong to this set and can be explained by quantitative design principles at the physiological level that ultimately constrain changes in gene expression. Conclusion: Our theoretical approach suggests a method for understanding the combined effect of changes in the expression of multiple genes on the activity of metabolic pathways, and consequently on the adaptation of cellular metabolism to heat shock. This method identifies quantitative design principles that facilitate understanding of the response of the cell to stress.
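A toy version of this in silico screening can make the idea concrete. The sketch below enumerates expression fold-changes for a two-step pathway and keeps only the profiles that satisfy simple physiological criteria; the kinetics, thresholds, and cost function are illustrative assumptions, not the paper's model of glycolysis.

```python
# Toy screen: enumerate fold-changes in enzyme expression for a two-step
# pathway (S -> X -> P) and keep profiles meeting physiological criteria.
import itertools
import numpy as np

k1, k2, S = 1.0, 1.0, 1.0          # assumed rate constants, fixed substrate
folds = np.linspace(0.5, 8.0, 16)  # candidate expression fold-changes

def steady_state(f1, f2):
    """Mass-action steady state: flux = k1*f1*S, intermediate X = flux/(k2*f2)."""
    flux = k1 * f1 * S
    x = flux / (k2 * f2)
    return flux, x

base_flux, base_x = steady_state(1.0, 1.0)

effective = []
for f1, f2 in itertools.product(folds, repeat=2):
    flux, x = steady_state(f1, f2)
    # Criteria (illustrative): boost flux at least 2x, keep the intermediate
    # from accumulating beyond 1.5x, and track total protein cost f1 + f2.
    if flux >= 2 * base_flux and x <= 1.5 * base_x:
        effective.append((f1, f2, f1 + f2))

effective.sort(key=lambda t: t[2])            # cheapest profiles first
print(f"{len(effective)} of {folds.size**2} profiles pass the criteria")
print("cheapest profile (f1, f2):", effective[0][:2])
```

The surviving subset plays the role of the "effective" GEPs in the abstract: only a small region of the expression space satisfies all criteria at once.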
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost and leads to an increase in the accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact curves. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of each proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation but also, and maybe even more so, for performing Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of each proposal. In the third part of the thesis, a proxy is coupled to an error model to provide the approximate response for the two-stage MCMC setup. We demonstrate an increase in the acceptance rate by a factor of 1.5 to 3 with respect to one-stage MCMC. An open question remains: how to choose the size of the learning set and identify the realizations that optimize the construction of the error model. This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
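A minimal sketch of the functional error model idea may help. The snippet below builds synthetic proxy and exact response curves, performs a discretized FPCA on each set, learns a linear regression between proxy and exact scores on a small training subset, and uses it to correct all remaining proxy responses. The curve shapes, the proxy bias, and the subset selection (random rather than the distance kernel method) are assumptions for illustration.

```python
# Hedged sketch of a functional error model: regress exact FPCA scores on
# proxy FPCA scores using a small training subset, then correct all proxies.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 100)                    # time axis of the responses
n, n_train = 200, 20                           # realizations / exact runs

# Synthetic ensemble: each realization is a breakthrough-like curve; the
# proxy is systematically biased and noisy relative to the exact model.
arrival = rng.uniform(2, 6, size=n)
exact = 1 / (1 + np.exp(-(t - arrival[:, None])))
proxy = (1 / (1 + np.exp(-(t - 0.8 * arrival[:, None] - 0.5)))
         + 0.02 * rng.standard_normal((n, len(t))))

train = rng.choice(n, n_train, replace=False)  # subset with exact runs

# Discretized FPCA: PCA on the curves, keeping a few leading components.
pca_p, pca_e = PCA(n_components=3), PCA(n_components=3)
sp = pca_p.fit_transform(proxy)                # proxy scores (all realizations)
se = pca_e.fit_transform(exact[train])         # exact scores (training only)

reg = LinearRegression().fit(sp[train], se)    # score-to-score error model
corrected = pca_e.inverse_transform(reg.predict(sp))

rmse_proxy = np.sqrt(np.mean((proxy - exact) ** 2))
rmse_corr = np.sqrt(np.mean((corrected - exact) ** 2))
print(f"RMSE proxy: {rmse_proxy:.3f}  ->  corrected: {rmse_corr:.3f}")
```

The same corrected responses can serve as the cheap first-stage evaluation in a two-stage MCMC: a proposal is only run through the exact flow model if its corrected proxy response passes the preliminary acceptance test.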
Abstract:
Regulation has in many cases been delegated to independent agencies, raising the question of how the democratic accountability of these agencies is ensured. There are few empirical approaches to agency accountability. We offer such an approach, resting upon three propositions. First, we scrutinize agency accountability both de jure (accountability is ensured by the formal rights of accountability 'fora' to receive information and impose consequences) and de facto (the capability of fora to use these rights depends on resources and decision costs that affect the credibility of their sanctioning capacity). Second, accountability must be evaluated separately at the political, operational, and managerial levels. Third, at each level accountability is enacted by a system of several (partially) interdependent fora, which together form an accountability regime. The proposed framework is applied to the case of the German Bundesnetzagentur's accountability regime, demonstrating its suitability for empirical purposes. Regulatory agencies are often considered independent, yet accountable. This article provides a realistic framework for the study of the accountability 'regimes' in which they are embedded. It emphasizes the need to identify the various actors (accountability fora) to which agencies are formally accountable (parliamentary committees, auditing bodies, courts, and so on) and to consider possible relationships between them. It argues that formal accountability 'on paper', as defined in official documents, does not fully account for de facto accountability, which depends on the resources possessed by the fora (mainly information-processing and decision-making capacities) and the credibility of their sanctioning capacities. The article applies this framework to the German Bundesnetzagentur.
Abstract:
The Kenyan forestry and sawmilling industries have been subject to a changing environment since 1999, when the industrial forest plantations were closed down. This has lowered the raw material supply and reduced sawmill operations and the viability of sawmill enterprises. The capacity of the 276 registered sawmills is not sufficient to fulfill sawn timber demand in Kenya, owing to technological degradation and the lack of a qualified labor force, which in turn result from the absence of sawmilling education and further training in Kenya. The lack of competent sawmill workers has led to low raw material recovery, underutilization of resources, and loss of employment. The objective of the work was to suggest models, methods, and approaches for the competence and capacity development of the Kenyan sawmilling industry, sawmills, and their workers. A nationwide field survey, interviews, a questionnaire, and a literature review were used for data collection to identify the sawmills' competence development areas and to suggest models and methods for their capacity building. The sampling frame included 22 sawmills, representing 72.5% of all registered sawmills in Kenya. The results confirmed that the sawmills' technological level was backward, productivity low, raw material recovery unacceptable, and workers' professional education low. The future challenge will be how to establish the sawmills' capacity building and workers' competence development. Sawmilling industry development requires various actions through new development models and approaches. Activities should be started for technological development and workers' competence development. This requires re-starting vocational training in sawmilling and establishing more effective co-operation between the sawmills and their stakeholder groups. In competence development, the Enterprise Competence Management Model of Nurminen (2007) can be used, whereas the best training model and approach would be a practically oriented learning-at-work model in which short courses, technical assistance, and extension services would be the key functions.
Abstract:
In this article, the results of a modified SERVQUAL questionnaire (Parasuraman et al., 1991) are reported. The modifications consisted in substituting questionnaire items particularly suited to a specific service (banking) and context (county of Girona, Spain) for the original, rather general and abstract items. These modifications led to more interpretable factors which accounted for a higher percentage of item variance. The data were submitted to various structural equation models, which made it possible to conclude that the questionnaire contains items with a high measurement quality with respect to five identified dimensions of service quality, which differ from those specified by Parasuraman et al. and are specific to the banking service. The two dimensions relating to the behaviour of employees have the greatest predictive power on overall quality and satisfaction ratings, which enables managers to use a low-cost reduced version of the questionnaire to monitor quality on a regular basis. It was also found that satisfaction and overall quality were perfectly correlated, thus showing that customers do not perceive these concepts as distinct.
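As an illustration of the measurement side of such an analysis (not the authors' structural equation models), the sketch below extracts five factors from synthetic questionnaire responses and checks the share of item variance they account for; the item count, loadings, and noise level are assumed.

```python
# Illustrative factor extraction on synthetic Likert-style data: 5 latent
# service-quality dimensions, 3 items loading on each.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n_resp, n_items, n_factors = 500, 15, 5

latent = rng.standard_normal((n_resp, n_factors))
loadings = np.zeros((n_factors, n_items))
for f in range(n_factors):
    loadings[f, 3 * f:3 * f + 3] = rng.uniform(0.6, 0.9, 3)
items = latent @ loadings + 0.5 * rng.standard_normal((n_resp, n_items))

fa = FactorAnalysis(n_components=n_factors, rotation="varimax").fit(items)
communalities = (fa.components_ ** 2).sum(axis=0)   # per-item shared variance
explained = communalities / items.var(axis=0)
print("share of item variance explained per item:", np.round(explained, 2))
```

A higher explained share per item is what "more interpretable factors accounting for a higher percentage of item variance" means in practice.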
Abstract:
This thesis introduces a real-time simulation environment based on the multibody simulation approach. The environment consists of components that are used in conventional product development, including computer-aided drawing, visualization, dynamic simulation and finite element software architecture, data transfer and haptics. These components are combined to perform as a coupled system on one platform. The environment is used to simulate mobile and industrial machines at different stages of the product lifetime; consequently, the demands of the simulated scenarios vary. In this thesis, the real-time simulation environment based on the multibody approach is used to study the reel mechanism of a paper machine and a gantry crane. These case systems demonstrate the usability of the real-time simulation environment for fault detection purposes and in the context of a training simulator. In order to describe the dynamic performance of a mobile or industrial machine, the nonlinear equations of motion must be defined. In this thesis, the dynamic behaviour of machines is modelled using the multibody simulation approach. A multibody system may consist of rigid and flexible bodies that are joined using kinematic joint constraints, while force components are used to describe the actuators. The strength of multibody dynamics lies in its ability to describe, in a systematic manner, nonlinearities arising from wear of components, friction, large rotations, or contact forces. For this reason, the interfaces between subsystems such as the mechanics, hydraulics, and control systems of a mechatronic machine can be defined and analyzed in a straightforward manner.
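A minimal sketch can illustrate the ingredients named above: a rigid body on a revolute joint, a force element acting as an actuator, and a nonlinear friction term. All parameters below are illustrative, not taken from the thesis.

```python
# One rigid body on a revolute joint with a motor torque (force element)
# and a smoothed Coulomb friction nonlinearity, integrated in time.
import numpy as np
from scipy.integrate import solve_ivp

m, l, g = 2.0, 0.5, 9.81          # mass, arm length, gravity
I = m * l**2                      # point-mass inertia about the joint
c, tau_f = 0.05, 0.2              # viscous and Coulomb friction coefficients

def motor_torque(t):
    return 1.5 * np.sin(0.5 * t)  # prescribed actuator input

def rhs(t, y):
    th, om = y
    # Equation of motion about the joint: gravity + friction + actuator.
    friction = -c * om - tau_f * np.tanh(om / 1e-2)   # smoothed Coulomb model
    alpha = (-m * g * l * np.sin(th) + friction + motor_torque(t)) / I
    return [om, alpha]

sol = solve_ivp(rhs, (0.0, 10.0), [0.1, 0.0], max_step=1e-2)
print(f"final angle: {sol.y[0, -1]:.3f} rad")
```

Real-time use imposes a fixed step budget per frame, which is why the thesis emphasizes formulating joints and force elements so the equations of motion can be evaluated at a deterministic cost.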
Abstract:
The objective of this thesis is to develop and study the Differential Evolution algorithm for multi-objective optimization with constraints. Differential Evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. Multi-objective evolutionary algorithms have become popular because they are able to produce a set of compromise solutions during the search process to approximate the Pareto-optimal front. The starting point for this thesis was an idea of how Differential Evolution, with simple changes, could be extended to optimization with multiple constraints and objectives. This approach is implemented, experimentally studied, and further developed in the work. The development and study concentrate on the multi-objective optimization aspect. The main outcomes of the work are versions of a method called Generalized Differential Evolution, which aim to improve the performance of the method in multi-objective optimization. A diversity preservation technique that is effective and efficient compared to previous diversity preservation techniques is developed. The thesis also studies the influence of the control parameters of Differential Evolution in multi-objective optimization and gives proposals for selecting initial control parameter values. Overall, the work contributes to the diversity preservation of solutions in multi-objective optimization.
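For readers unfamiliar with the method, the sketch below shows the basic single-objective DE/rand/1/bin scheme the thesis builds on; it is not Generalized Differential Evolution itself, and the test function and control parameter values are assumptions.

```python
# Basic DE/rand/1/bin: mutation, binomial crossover, greedy selection.
# F and CR are the control parameters whose influence the thesis studies.
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                           generations=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: v = a + F * (b - c), with a, b, c distinct from i.
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 3, replace=False)
            v = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
            # Binomial crossover with one guaranteed mutant coordinate.
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, v, pop[i])
            # Greedy selection.
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    return pop[np.argmin(fit)], fit.min()

best, value = differential_evolution(lambda x: np.sum(x**2),
                                     np.array([[-5.0, 5.0]] * 3))
print(f"best value: {value:.2e}")
```

Generalized Differential Evolution modifies only the selection step: the trial replaces the parent based on constraint violation and Pareto dominance rather than a single objective value.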
Abstract:
A nutrient impoverishment experiment using mesocosms was carried out in a shallow eutrophic reservoir to evaluate nutrient removal as a technique for eutrophication reduction. Garças Pond is located in the Parque Estadual das Fontes do Ipiranga Biological Reserve, situated in the southeast region of the municipality of São Paulo. Three treatments were designed, each consisting of two enclosures containing 360 liters of water. The mesocosms were made of polyethylene bags and PVC pipes and were attached to the lake bottom. Treatment dilutions followed Carlson's trophic state index as modified by Toledo and collaborators, constituting the oligotrophic, mesotrophic, and eutrophic treatments. Ten abiotic and nine biological samplings were carried out simultaneously. The trophic states previously calculated for the treatments remained unaltered during the entire experimental period, except in the mesotrophic mesocosms, where TP reached oligotrophic concentrations on the 31st day of the experiment. In all three treatments a reduction of DO was observed during the study period, while NH4+ and free CO2 rose, indicating decomposition within the enclosures. Nutrient impoverishment caused P limitation in all three treatments during most of the experimental period. Reductions in algal density, chlorophyll a, and phaeophytin were observed in all treatments. Competition for nutrients led to changes in phytoplankton composition. Once isolated and diluted, the mesocosms' trophic state did not change, leading to the conclusion that isolating the allochthonous nutrient sources is the first step toward the recovery of Garças Pond.
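For orientation, the sketch below computes Carlson's original trophic state index (TSI) and a commonly used classification; the study applied the Toledo et al. modification, whose coefficients for tropical systems differ, so this is an illustrative stand-in, and the cut-off values vary between authors.

```python
# Carlson's (1977) TSI from total phosphorus (ug/L), chlorophyll a (ug/L),
# and Secchi depth (m); the mean over available variables is reported.
import math

def tsi_carlson(tp_ugL=None, chl_ugL=None, secchi_m=None):
    """Return the mean TSI over whichever variables are supplied."""
    parts = []
    if tp_ugL is not None:
        parts.append(14.42 * math.log(tp_ugL) + 4.15)
    if chl_ugL is not None:
        parts.append(9.81 * math.log(chl_ugL) + 30.6)
    if secchi_m is not None:
        parts.append(60 - 14.41 * math.log(secchi_m))
    return sum(parts) / len(parts)

def trophic_state(tsi):
    # Commonly used cut-offs; boundaries differ between authors.
    if tsi < 40:
        return "oligotrophic"
    if tsi < 50:
        return "mesotrophic"
    return "eutrophic"

tsi = tsi_carlson(tp_ugL=60, chl_ugL=20, secchi_m=0.8)
print(f"TSI = {tsi:.1f} -> {trophic_state(tsi)}")
```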
Abstract:
The aim of this study was to analyse mothers' working time patterns across 22 European countries. The focus was on three questions: how much mothers prefer to work, how much they actually work, and to what degree their preferred and actual working times are (in)consistent with each other. The study examined cross-national differences in mothers' working time patterns, compared mothers' working times with those of childless women and fathers, and considered the individual- and country-level factors that explain the variation between them. The theoretical point of departure was an integrative approach which assumes that there are various kinds of explanations for the differences in mothers' working time patterns – structural, cultural and institutional – and that these factors operate at two levels: the individual and the country level. Data were extracted from the European Social Survey (ESS) 2010/2011. The results showed that mothers' working time patterns, both preferred and actual, varied across European countries. Four clusters were formed to illustrate the differences. In the full-time pattern, full-time work was the most important form of work, leaving all other working time forms marginal. The full-time pattern was observed in terms of preferred working times in Bulgaria and Portugal. In polarised pattern countries, full-time work was also important, but it was accompanied by a large share of mothers not working at all. In terms of preferred working times, many Eastern and Southern European countries followed this pattern, whereas in terms of actual working times it included all Eastern and Southern European countries as well as Finland. The combination pattern was characterised by the importance of long part-time hours and full-time work. It was the preferred working time pattern in the Nordic countries, France, Slovenia, and Spain, while Belgium, Denmark, France, Norway, and Sweden followed it in terms of actual working times. The fourth cluster, the part-time pattern, was characterised by the prevalence of short and long part-time work. In terms of preferred working times, it was followed in Belgium, Germany, Ireland, the Netherlands and Switzerland; apart from Belgium, the same countries followed it in terms of actual working times. The consistency between preferred and actual working times was rather strong in a majority of countries. However, six countries fell under different working time patterns when preferred and actual working times were compared. Comparison of working mothers', childless women's, and fathers' working times showed that the differences between these groups were surprisingly small. Only in part-time pattern countries did working mothers work significantly shorter hours than working childless women and fathers. The results therefore revealed that when mothers' working times are under study, an important question is whether the population examined consists of all mothers or only working mothers. The results moreover supported the use of the integrative theoretical approach when studying mothers' working time patterns, indicating that mothers' working time patterns in all countries are shaped by various opportunities and constraints comprising structural, cultural, institutional, and individual-level factors.
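The clustering step implied above can be sketched as follows: countries are grouped by the share of mothers in each working-time category. The abstract does not state the exact method, so k-means over synthetic shares is used purely for illustration.

```python
# Illustrative grouping of countries into working-time patterns: each country
# is a vector of shares (not working, short part-time, long part-time,
# full-time); data are synthetic placeholders, not ESS 2010/2011 figures.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
countries = [f"country_{i}" for i in range(22)]

shares = rng.dirichlet(alpha=[2, 2, 2, 4], size=len(countries))

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(shares)
for label in range(4):
    members = [c for c, l in zip(countries, km.labels_) if l == label]
    centre = np.round(km.cluster_centers_[label], 2)
    print(f"pattern {label}: centre {centre} -> {members}")
```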
Abstract:
Bottom of the pyramid (BoP) markets are an underserved market of approximately four billion people living on under $5 a day in four regional areas: Africa, Asia, Eastern Europe and Latin America. According to estimations, the BoP market forms a $5 trillion global consumer market. Despite the potential of BoP markets, companies have traditionally focused on serving the markets of developed countries and ignored the large customer group at the bottom of the pyramid. The BoP approach, as first developed by Prahalad and Hart in 2002, has focused on multinational corporations (MNCs), which were thought of as the ones who should take responsibility for serving the customers at the bottom of the pyramid. This study challenges this proposition and gives evidence that smaller international new ventures – entrepreneurial firms that are international from their birth – can also be successful in BoP markets. BoP markets are characterized by a number of deficiencies in the institutional environment, such as a strong reliance on the informal sector, lack of infrastructure and lack of skilled labor. The purpose of this study is to increase the understanding of international entrepreneurship in BoP markets by analyzing how international new ventures overcome institutional constraints in BoP markets and how institutional uncertainty can be exploited by solving institutional problems. The main objective is divided into four sub-objectives:
• To describe the opportunities and challenges BoP markets present
• To analyze the internationalization of INVs to BoP markets
• To examine what kinds of strategies international entrepreneurs use to overcome institutional constraints
• To explore the opportunities institutional uncertainty offers for INVs
A qualitative approach was used to conduct this study, and a multiple-case study was chosen as the research strategy in order to allow cross-case analysis. The empirical data were collected through four interviews with the companies Fuzu, Mifuko, Palmroth Consulting and Sibesonke. The results indicated that an understanding of the wider institutional environment improves the survival prospects of INVs in BoP markets and that it is indeed possible to exploit institutional uncertainty by solving institutional problems. The main findings were that first-hand experience of the markets and grassroots-level information are the best assets in internationalization to BoP markets. This study highlights that international entrepreneurs with limited resources can improve the lives of people at the BoP through their business operations and act as small-scale institutional entrepreneurs contributing to the development of the institutional environment of BoP markets.
Abstract:
This study investigated the impact of an instructional learning strategy, peer-led team learning (PLTL), on secondary school students' conceptual understanding of biology concepts related to the topic of evolution. Using a mixed methods approach, data were gathered quantitatively through pre/posttesting with a repeated measures design and qualitatively through observations, questionnaires, and interviews. The repeated measures design was implemented to explore the impact of PLTL on students' understanding of concepts related to evolution and students' attitudes towards PLTL implementation. Quantitative pre/posttest results could not be compared through inferential statistics because of inconsistencies in the data arising from a small sample size and design limitations; however, qualitative data identified positive attitudes towards the implementation of PLTL, with students reporting gains in conceptual understanding, academic achievement, and interdependent work ethic. Implications of these findings for learning, teaching, and the educational literature include an understanding of student attitudes towards PLTL and insight into the role PLTL plays in improving conceptual understanding of biology concepts. Strategies are suggested for continuing research in the area of PLTL.