962 results for construction innovation
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has increased considerably over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays we face many problems, ranging from water prospection to the sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of these approaches is the computational cost of performing complex flow simulations for each realization.
In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods use an approximate flow simulation to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run only for this subset, on the basis of which inference is made. Our objective is to increase the performance of this approach by using all of the available information, not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses, predicting the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost, and leads to more accurate and more robust uncertainty propagation.
The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact curves. In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. Moreover, the individual correction of each proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications.
The concept of a functional error model is useful not only in the context of uncertainty propagation but also, and perhaps even more so, for performing Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, in which the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of each proposal. In the third part of the thesis, a proxy coupled with an error model provides the approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to one-stage MCMC.
An open question remains: how to choose the size of the learning set, and how to identify the realizations that optimize the construction of the error model. This calls for an iterative strategy in which, as new flow simulations are performed, the error model is improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which the methodology is applied to a saline intrusion problem in a coastal aquifer.
Abstract:
This R package provides sociologists (and related scientists) with a toolbox to facilitate the construction of social position indicators from survey data. Social position indicators refer to what is commonly known as social class and social status. The sociological literature contains many theoretical conceptualisations and empirical operationalisations of social class and social status. This first version of the package offers tools to construct the International Socio-Economic Index of Occupational Status (ISEI) and the Oesch social class schema. It also provides tools to convert several occupational classifications (PCS82, PCS03, and ISCO08) into a common one (ISCO88) to facilitate data harmonisation work, and tools to collapse (i.e. group) the modalities of social position indicators.
Abstract:
This paper contains a joint ESHG/ASHG position document with recommendations regarding responsible innovation in prenatal screening with non-invasive prenatal testing (NIPT). By virtue of its greater accuracy and safety with respect to prenatal screening for common autosomal aneuploidies, NIPT has the potential to help the practice better achieve its aim of facilitating autonomous reproductive choices, provided that balanced pretest information and non-directive counseling are available as part of the screening offer. Depending on the health-care setting, different scenarios for NIPT-based screening for common autosomal aneuploidies are possible. The trade-offs involved in these scenarios should be assessed in light of the aim of screening, the balance of benefits and burdens for pregnant women and their partners, and considerations of cost-effectiveness and justice. With improving screening technologies and decreasing costs of sequencing and analysis, it will become possible in the near future to significantly expand the scope of prenatal screening beyond common autosomal aneuploidies. Commercial providers have already begun expanding their tests to include sex-chromosomal abnormalities and microdeletions. However, multiple false positives may undermine the main achievement of NIPT in the context of prenatal screening: the significant reduction of the invasive testing rate. This document argues for a cautious expansion of the scope of prenatal screening to serious congenital and childhood disorders, only following sound validation studies and a comprehensive evaluation of all relevant aspects. A further core message of this document is that in countries where prenatal screening is offered as a public health programme, governments and public health authorities should adopt an active role to ensure the responsible innovation of prenatal screening on the basis of ethical principles.
Crucial elements are the quality of the screening process as a whole (including non-laboratory aspects such as information and counseling), education of professionals, systematic evaluation of all aspects of prenatal screening, development of better evaluation tools in the light of the aim of the practice, accountability to all stakeholders (including children born from screened pregnancies and persons living with the conditions targeted in prenatal screening), and promotion of equity of access.
Abstract:
Innovation is the word of this decade. According to definitions of innovation, a company's product or service has not been an innovation unless it has had a positive sales impact and a meaningful market share. The research problem of this master's thesis is to find out what the innovation process of complex new consumer products and services looks like in the new innovation paradigm. The objective is to answer two research questions: 1) What are the critical success factors a company should address when implementing the paradigm change in mass-market consumer business with complex products and services? 2) What process or framework could a firm follow? The research problem is examined from the points of view of one company's innovation creation process, networking, and organisation change management challenges. Special focus is placed on the perspective of an existing company entering a new business area. An innovation process management framework for complex new consumer products and services in the new innovation paradigm has been created with the support of several existing innovation theories. The new process framework includes the critical innovation process elements companies should take into consideration in their daily activities when implementing new business innovation. The case company's location-based business implementation activities are studied through the new innovation process framework. This case study showed how important it is to manage the process, to observe how the target market and its competition develop during the company's own innovation process, to make decisions at the right time, and to plan and implement organisational change management as one activity of the innovation process from the beginning. Finally, this master's thesis showed that every company needs to create its own innovation process master plan with milestones and activities. One plan does not fit all, but any company can start its planning from the new innovation process introduced in this master's thesis.
Abstract:
This study explored ethnic identity among 410 mestizo students attending one of three universities, which varied in their ethnic composition and their educational model. One of these universities was private and, like one of the public ones, had mostly mestizo students. The third educational context, also public, followed an intercultural model of education, and its students were a mix of mestizo and indigenous. The Multigroup Ethnic Identity Measure (MEIM) was administered to the students in order to compare their scores on ethnic identity and its components: affirmation, belonging or commitment, and exploration. Principal components factor analysis with varimax rotation and tests of mean group differences were performed. The results showed significant differences between the studied groups. Scores on ethnic identity and its components were significantly higher among the mestizo group from the university with an intercultural model of education than among mestizos from the public and private universities of the same region. Implications of these findings for education are considered, as are the strengths and limitations of this research.
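The analysis pipeline named above, principal components followed by varimax rotation, can be sketched compactly. This is a generic illustration on synthetic data, not the study's actual analysis: the respondents, items, and two latent components below are hypothetical stand-ins for MEIM data.

```python
import numpy as np

def varimax(loadings, tol=1e-8, max_iter=200):
    """Orthogonal varimax rotation of a factor-loading matrix (Kaiser's criterion)."""
    p, k = loadings.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        # Gradient of the varimax criterion with respect to the rotation.
        G = loadings.T @ (L**3 - L @ np.diag((L**2).sum(axis=0)) / p)
        U, S, Vt = np.linalg.svd(G)
        R = U @ Vt                      # nearest orthogonal rotation
        var_new = S.sum()
        if var_new - var_old < tol:
            break
        var_old = var_new
    return loadings @ R

rng = np.random.default_rng(1)
# Hypothetical item responses: 300 respondents, 12 Likert-type items
# driven by two latent components (e.g., "exploration" and "commitment").
latent = rng.normal(size=(300, 2))
items = latent @ rng.normal(size=(2, 12)) + 0.5 * rng.normal(size=(300, 12))

# Principal components of the correlation matrix, then varimax rotation.
corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)          # ascending eigenvalues
loadings = eigvecs[:, -2:] * np.sqrt(eigvals[-2:])  # two retained components
rotated = varimax(loadings)
```

Because the rotation is orthogonal, each item's communality (the sum of its squared loadings) is unchanged; only the distribution of loadings across components becomes simpler to interpret.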
Abstract:
Launched by representatives from the Union démocratique du centre (UDC) with the aim of circumventing political and judicial decisions made at both local and national levels, the 2009 federal popular initiative calling for a ban on the construction of minarets rekindled the stigmatisation of Muslims living in Switzerland. Within the prevalent institutional configuration it moreover revived controversies surrounding issues such as direct democracy versus fundamental rights, or "the will of the people" versus "the power of the judges", whether national or international. "Judicialisation" is a polysemous concept. It is not understood here as the transfer to the courts of matters of political significance - in this instance the public regulation of religion - but as a process of juridification (or juridicalisation) in which court rulings were constantly anticipated in the political debate provoked by the popular initiative.
Abstract:
The article discusses the development of WEBDATANET, established in 2011, which aims to create a multidisciplinary network of web-based data collection experts in Europe. Topics include the presence of 190 experts in 30 European countries and abroad, and the establishment of web-based teaching and discussion platforms, working groups, and task forces. Also discussed is the scope of the research carried out by WEBDATANET. In light of the growing importance of web-based data in the social and behavioral sciences, WEBDATANET was established in 2011 as a COST Action (IS 1004) to create a multidisciplinary network of web-based data collection experts: (web) survey methodologists, psychologists, sociologists, linguists, economists, Internet scientists, media and public opinion researchers. The aim was to accumulate and synthesize knowledge regarding methodological issues of web-based data collection (surveys, experiments, tests, non-reactive data, and mobile Internet research), and foster its scientific usage in a broader community.
Abstract:
The expansion of broadband speed and coverage over IP technology, extending across transport and terminal access networks, has increased the demand for applications and content that, being provided uniformly over it, give rise to convergence. These shifts in technologies and in enterprise business models create the need to change the perspective and scope of Universal Service and of regulatory frameworks, the latter resting on the same principles as always but varying in their application. Several aspects require special and renewed attention, such as the definition of relevant markets and dominant operators, the role of packages, the interconnection of IP networks, network neutrality, the use of spectrum with a vision of value for the citizenship, the application of the competition framework, new forms of licensing, the treatment of risk in the networks, and changes in the regulatory authorities, among others. These matters are treated from the perspective of current trends in the world and their conceptual justification.
Abstract:
Local trajectories and arrangements play a significant role because the development of a research field, such as nanoscience and nanotechnology, requires substantial investments in human and instrumental resources. But why are these often concentrated in a limited number of places? What dynamics lead to such concentration? The hypothesis is that heterogeneous resources are assembled through the action of local actors. The chapter explores, from an Actor Network Theory (ANT) perspective, how research dynamics emerge locally from the revival of local traditions, the local and national action of institutional entrepreneurs, controversial dynamics, and researchers' arrangements to involve other actors. It examines how these actors connect up with each other and mutually commit themselves to the development of new technologies. It focuses on the role of narratives in this assembling: how local narratives of the past were mobilized, and to what effect.
Abstract:
The objective of this research was to study the role of key individuals in facilitating technology-enabled bottom-up innovation in the context of large organizations. The development of innovation was followed from the point of view of the individual actor (key individual) in two cases, across three levels (individual, team, and organization), using knowledge creation and innovation models. The study provides a theoretical synthesis and a framework through which the study is driven. The results indicate that in bottom-up initiated innovations the role of key individuals is still crucial, but innovation today is a collective effort involving several entrepreneurial key individuals: the innovator, the user champion, and the organizational sponsor, whose collaboration and developing interaction drive innovation further. The team's work is functional and fluent, but it meets great problems in its interaction with the organization. Large organizations should develop their practices and their ability to react to emerging bottom-up initiatives in order to embed innovation in the organization and achieve sustainable innovation. In addition, bottom-up initiated innovations are demonstrations of people's knowing and tacit knowledge, and therefore of the renewal of an organization.
Abstract:
This article analyses the impact that innovation expenditure and intrasectoral and intersectoral externalities have on productivity in Spanish firms. While there is an extensive literature analysing the relationship between innovation and productivity, far fewer studies examine the importance of sectoral externalities, especially with a focus on Spain. One novelty of the study, which covers the industrial and service sectors, is that we also jointly consider the technology level of the sector in which the firm operates and the firm size. The database used is the Technological Innovation Panel, PITEC, which includes 12,813 firms for the year 2008 and has been little used in this type of study. The estimation method is Iteratively Reweighted Least Squares (IRLS), which is very useful for obtaining robust estimates in the presence of outliers. The results confirm that innovation has a positive effect on productivity, especially in high-tech and large firms. The impact of externalities is more heterogeneous: while intrasectoral externalities have a positive and significant effect, especially in low-tech firms independently of size, intersectoral externalities have a more ambiguous effect, being clearly significant for advanced industries, in which size has a positive effect.
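The IRLS estimator mentioned above can be sketched as follows. This is a generic robust-regression illustration with Huber weights on synthetic data, not the paper's actual specification: the "firm" variables below are hypothetical, and the weight function and tuning constant are standard textbook choices.

```python
import numpy as np

def irls_huber(X, y, c=1.345, n_iter=50, tol=1e-8):
    """Iteratively Reweighted Least Squares with Huber weights:
    observations with large residuals are progressively down-weighted,
    making the fit robust to outliers."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS starting point
    for _ in range(n_iter):
        r = y - X @ beta
        scale = np.median(np.abs(r - np.median(r))) / 0.6745  # robust scale (MAD)
        u = np.abs(r) / max(scale, 1e-12)
        w = np.where(u <= c, 1.0, c / u)             # Huber weight function
        Xw = X * w[:, None]
        beta_new = np.linalg.solve(X.T @ Xw, Xw.T @ y)  # weighted normal equations
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta

rng = np.random.default_rng(2)
# Hypothetical firm data: productivity vs. innovation expenditure,
# with a handful of gross outliers contaminating the sample.
x = rng.uniform(0, 10, 300)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 0.5 * x + rng.normal(0, 0.3, 300)
y[:15] += 8.0                                        # outlying observations

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_irls = irls_huber(X, y)
# The IRLS estimate should sit closer to the true (1.0, 0.5) than plain OLS.
```

The down-weighting step is what gives IRLS its robustness: each iteration solves an ordinary weighted least-squares problem, so the method needs nothing beyond linear algebra.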