959 results for Integration approach


Relevance:

100.00%

Publisher:

Abstract:

The goals set for 2020 in the drug research and development process are clearly focused on shortening the duration of pre-clinical and clinical research and on reducing the attrition rate among new molecules. To meet these goals, a new concept has been developed and applied to this complex and lengthy process: Quantitative and Systems Pharmacology. Moreover, this innovative approach may be crucial for the treatment of certain lethal brain tumours, glioblastoma multiforme (GBM), which remain a therapeutic challenge and therefore a disease with a fatal outcome for patients. For these reasons this master's dissertation is particularly relevant. Its objectives are to assess the potential impact and biological significance of varying pharmacological parameters beyond potency in the context of the cellular drug response, by evaluating the perturbation induced in GBM cells by PDK1 inhibitors and by carrying out a multiparametric dose-response characterization of these new molecules. This dissertation places Portugal at the forefront of Quantitative and Systems Pharmacology applied to the drug research and development process. Ultimately, it may contribute to better prediction of drug behaviour during this process, with potential benefits for patients, pharmaceutical companies, research institutes, government, and higher-education institutions.
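
A minimal sketch of the kind of multiparametric dose-response characterization the abstract refers to, fitting a four-parameter Hill model so that efficacy and Hill slope can be read alongside potency; the data, parameter values, and inhibitor are hypothetical, not taken from the dissertation:

```python
# Sketch: four-parameter Hill fit for a hypothetical PDK1 inhibitor, so that
# potency (EC50) is reported alongside efficacy (E_inf) and steepness (Hill
# slope). Data are synthetic; this is not the dissertation's pipeline.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, e0, e_inf, ec50, h):
    """Four-parameter logistic (Hill) model of viability vs. concentration."""
    return e_inf + (e0 - e_inf) / (1.0 + (conc / ec50) ** h)

doses = np.logspace(-3, 2, 12)                # uM, log-spaced dose range
rng = np.random.default_rng(0)
viability = hill(doses, 1.0, 0.2, 0.5, 1.3) + rng.normal(0, 0.02, doses.size)

popt, _ = curve_fit(hill, doses, viability, p0=[1.0, 0.1, 1.0, 1.0])
e0, e_inf, ec50, h = popt
print(f"EC50 (potency): {ec50:.3g} uM, E_inf (efficacy): {e_inf:.3g}, "
      f"Hill slope: {h:.3g}")
```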

Relevance:

70.00%

Publisher:

Abstract:

Dissertation presented to obtain the Degree of Doctor in Environmental Engineering from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia

Relevance:

70.00%

Publisher:

Abstract:

Dissertation to obtain the Degree of Master in Electrical and Computer Engineering

Relevance:

70.00%

Publisher:

Abstract:

Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity and the larger-scale trend of the prevailing hydraulic conductivity field. The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges.

In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proves to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
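
A minimal sketch of the gradual-deformation proposal idea described above, written as a prior-preserving update m' = m cos(theta) + z sin(theta) inside a Metropolis sampler over a zero-mean Gaussian prior; the linear forward operator and all parameter values are illustrative assumptions, not the thesis implementation:

```python
# Sketch: gradual-deformation proposal inside a Metropolis sampler. With a
# zero-mean Gaussian prior, m' = m*cos(theta) + z*sin(theta) (z an
# independent prior draw) leaves the prior invariant, so acceptance only
# involves the likelihood ratio; theta tunes the perturbation strength.
# The linear operator G is a hypothetical stand-in for a tomography kernel.
import numpy as np

rng = np.random.default_rng(1)
n = 100                                        # number of model parameters
idx = np.arange(n)
C = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 10.0)  # exponential covariance
L = np.linalg.cholesky(C)

def sample_prior():
    return L @ rng.standard_normal(n)

def log_likelihood(m, d_obs, G, sigma):
    r = d_obs - G @ m
    return -0.5 * np.sum((r / sigma) ** 2)

G = rng.standard_normal((30, n)) / np.sqrt(n)  # hypothetical forward operator
d_obs = G @ sample_prior() + 0.05 * rng.standard_normal(30)

theta, sigma = 0.2, 0.05
m = sample_prior()
ll = log_likelihood(m, d_obs, G, sigma)
for _ in range(5000):
    z = sample_prior()                         # independent prior realization
    m_prop = m * np.cos(theta) + z * np.sin(theta)
    ll_prop = log_likelihood(m_prop, d_obs, G, sigma)
    if np.log(rng.uniform()) < ll_prop - ll:   # Metropolis accept/reject
        m, ll = m_prop, ll_prop
```

A small theta yields high acceptance but slow exploration, while theta near pi/2 proposes near-independent prior draws; this trade-off is exactly the perturbation-strength flexibility the abstract highlights.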

Relevance:

70.00%

Publisher:

Abstract:

In this paper, we consider ATM networks in which the virtual path (VP) concept is implemented. How to multiplex two or more diverse traffic classes while meeting their different quality of service (QoS) requirements is a complicated open problem. Two distinct options are available: integration and segregation. In the integration approach, all the traffic from different connections is multiplexed onto one VP, which implies that the most restrictive QoS requirements must be applied to all services. Link utilization therefore decreases, because an unnecessarily stringent QoS is provided to some connections. In the segregation approach, the problem is much simplified: the different types of traffic are separated by assigning each a VP with dedicated resources (buffers and links). Resources may then not be efficiently utilized, because no bandwidth sharing can take place across VPs. The probability that the bandwidth required by the accepted connections exceeds the capacity of the link is evaluated as the probability of congestion (PC). Since the PC can be expressed as the cell loss probability (CLP), we simply carry out bandwidth allocation using the PC. We first focus on the influence of some parameters (CLP, bit rate, and burstiness) on the capacity required by a VP supporting a single traffic class, using the new convolution approach. Numerical results are presented both to compare the required capacity and to identify the conditions under which each approach is preferred.
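
A minimal sketch of how a convolution-based evaluation of the probability of congestion can work, assuming independent on/off sources that transmit at their peak rate with probability mean/peak; the source and link parameters are illustrative, and the paper's actual convolution algorithm may differ in detail:

```python
# Sketch: probability of congestion (PC) for a VP via convolution of the
# stationary bandwidth distributions of independent on/off sources. Each
# source transmits at its peak rate with probability p = mean/peak
# (burstiness = peak/mean), otherwise at rate 0. Illustrative values only.
import numpy as np

def pc_convolution(sources, link_capacity, unit=1):
    """sources: list of (peak_rate, activity_prob), rates in multiples of `unit`."""
    dist = np.array([1.0])                  # P(aggregate load = 0) = 1
    for peak, p in sources:
        k = int(round(peak / unit))
        comp = np.zeros(k + 1)
        comp[0], comp[k] = 1.0 - p, p       # source off / on at peak rate
        dist = np.convolve(dist, comp)      # add one independent source
    c = int(round(link_capacity / unit))
    return dist[c + 1:].sum()               # P(aggregate load > capacity)

# Ten sources with 2 Mb/s peak rate and burstiness 5 (so p = 0.2),
# multiplexed on a 10 Mb/s VP, rates discretized in 1 Mb/s units:
sources = [(2, 0.2)] * 10
print(f"PC = {pc_convolution(sources, link_capacity=10):.2e}")
```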

Relevance:

70.00%

Publisher:

Abstract:

If people monitor a visual stimulus stream for targets, they often miss the second target (T2) if it appears soon after the first (T1), a phenomenon known as the attentional blink. There is one exception: T2 is often not missed if it appears right after T1, i.e., at lag 1. This lag-1 sparing is commonly attributed to the possibility that T1 processing opens an attentional gate, which may be so sluggish that an early T2 can slip in before it closes. We investigated why the gate may close and exclude further stimuli from processing. We compared a control approach, which assumes that gate closing is exogenously triggered by the appearance of nontargets, and an integration approach, which assumes that gate closing is under endogenous control. As predicted by the latter but not the former, T2 performance and target reversals were strongly affected by the temporal distance between T1 and T2, whereas the presence or absence of a nontarget intervening between T1 and T2 had little impact. (c) 2005 Elsevier B.V. All rights reserved.

Relevance:

60.00%

Publisher:

Abstract:

When the Internet was born, the purpose was to interconnect computers to share digital data at large scale. When embedded systems were born, on the other hand, the objective was to control system components under real-time constraints through sensing devices, typically at small to medium scales. With the great evolution of Information and Communication Technology (ICT), the tendency is to enable ubiquitous and pervasive computing to control everything (physical processes and physical objects), anytime and at large scale. This new vision recently gave rise to the paradigm of Cyber-Physical Systems (CPS). In this position paper, we provide a realistic vision of the concept of the Cyber-Physical Internet (CPI), discuss its design requirements, and present the limitations of current networking abstractions in fulfilling these requirements. We also debate whether it is more productive to adopt a system integration approach or a radical design approach for building large-scale CPS. Finally, we present a sample of real-time challenges that must be considered in the design of the Cyber-Physical Internet.

Relevance:

60.00%

Publisher:

Abstract:

The EU has been one of the main actors involved in the construction of an international climate change regime, adopting it as a mark of identity in the international arena. This activism has fed back into the European political agenda and into those of its Member States. Climate change has therefore become a driver of the EU's growing involvement in energy policy and of the evolution of its governance. In this context, much attention has been paid to the integration of climate and energy policies agreed after the spring 2007 European Council. Apparently, this decision marked a decisive step towards the incorporation of the environmental variable into energy policy-making. Moreover, the Action Plan [2007-2009] "Energy Policy for Europe" outlined priority actions in a variety of energy-related areas, marking the commencement of the new European Energy Policy. Against this background, there is still much left to understand about its formulation and its further development. Rooted in the Environmental Policy Integration approach, this paper traces the increasing proximity between environment and energy policies in order to understand the green contribution to the construction of the European Energy Policy.

Relevance:

60.00%

Publisher:

Abstract:

Organizations often consider investing in a new Enterprise Resource Planning (ERP) system as a way to enhance their business processes, as it allows information used by multiple departments to be integrated into one harmonized computing system. The hope of gaining significant business benefits, such as reduced operating costs, is the key reason why organizations have invested in ERP systems since the 1990s. Still, not all ERP projects end in success, and deploying an ERP system does not guarantee the results people were waiting for. This research studies why organizations invest in ERP, but also what downsides ERP projects currently have. Additionally, Enterprise Application Integration (EAI) is studied as a next-generation ERP solution that challenges and develops traditional ERP. The research questions are: What are the weaknesses of traditional ERP deployment in today's business? How does the proposed next-generation ERP answer these weaknesses? At the beginning of the thesis, as an answer to the first research question, the basics of ERP implementation are introduced, with both the pros and cons of investing in ERP. Key concepts such as IS integration and EAI are also studied. The empirical section of the thesis focuses on answering the second research question from the integration perspective. A qualitative study is executed by interviewing five experienced IT professionals about EAI benefits, limitations, and problems. The thematic interview and questionnaire follow the main ERP elements presented in the literature. The research shows that adopting traditional ERP has multiple downsides, e.g. inflexibility and the need for large financial investments. To avoid these critical issues, organizations could find a solution in integrations between their current IS. Based on the empirical study, a new framework for the next-generation ERP is created, consisting of a model and a framework that address various features of IS adoption. With this framework, organizations can assess whether they should implement EAI or ERP. The model and framework suggest that there are multiple factors IT managers need to consider when planning their IT investments, including their current IS, the role of IT in the organization, and the new system's flexibility, investment level, and number of vendors. The framework created in the thesis encourages IT management to assess holistically their i) organization, ii) its IT, and iii) solution requirements in order to determine what kind of IS solution would suit their needs best.

Relevance:

60.00%

Publisher:

Abstract:

Smart healthcare is a complex domain for systems integration because of the human and technical factors and heterogeneous data sources involved. As part of the smart city, it is an area in which clinical functions require smart multi-system collaboration for effective communication among departments, and radiology is one of the areas that relies most heavily on intelligent information integration and communication. It therefore faces many integration and interoperability challenges, such as information collision, heterogeneous data sources, policy obstacles, and procedure mismanagement. The purpose of this study is to analyse the data, semantic, and pragmatic interoperability of systems integration in a radiology department, and to develop a pragmatic interoperability framework for guiding the integration. We selected an ongoing project at a local hospital for our case study. The project aims to achieve data sharing and interoperability among Radiology Information Systems (RIS), Electronic Patient Records (EPR), and Picture Archiving and Communication Systems (PACS). Qualitative data collection and analysis methods were used. The data sources consisted of documentation, including publications and internal working papers, one year of non-participant observation, and 37 interviews with radiologists, clinicians, directors of IT services, referring clinicians, radiographers, receptionists, and a secretary. We identified four primary phases of the data analysis process for the case study: requirements and barriers identification, integration approach, interoperability measurements, and knowledge foundations. Each phase is discussed and supported by qualitative data. Through the analysis we also develop a pragmatic interoperability framework that summarizes the empirical findings and proposes recommendations for guiding integration in the radiology context.

Relevance:

60.00%

Publisher:

Abstract:

This research project proposes an exploratory multiple-case study in Brazilian organizations. The theme to be explored is the 'jeitinho brasileiro' (the Brazilian way of working around rules), with its multiple interpretations and modes of operation in these social contexts. A multiple-perspective approach to the concept of culture will be adopted, one that includes cultural ambiguity within an interpretive view, going beyond the integration approach common to most studies on the theme. The research intends to (1) deepen the critiques that can be made of studies on the theme and (2) advance our understanding of organizations as socially constructed systems of meaning, through a broader and more complete approach to the phenomenon studied.

Relevance:

60.00%

Publisher:

Abstract:

The petrol industry has been investigated twice by the Monopolies and Mergers Commission (MMC) in the last 20 years. On both occasions the MMC found that the conduct of the companies was not against the public interest. These findings were based on a perceived stable relationship between oil and petrol prices. This paper develops a model of petrol prices using a co-integration approach, concluding that one must question the findings of the MMC.
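
A minimal sketch of the co-integration logic behind such a model, using an Engle-Granger test (statsmodels' coint) on synthetic random-walk series standing in for the paper's actual oil and petrol price data:

```python
# Sketch: Engle-Granger co-integration test of petrol vs. crude oil prices.
# Synthetic random-walk data stand in for the paper's actual price series.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(42)
oil = np.cumsum(rng.normal(0, 1, 500))        # I(1) crude price proxy
petrol = 0.8 * oil + rng.normal(0, 0.5, 500)  # co-integrated with oil

t_stat, p_value, _ = coint(petrol, oil)
print(f"Engle-Granger t-stat: {t_stat:.2f}, p-value: {p_value:.3f}")
# A small p-value indicates a stable long-run oil-petrol relationship; a
# breakdown of that link is what would undermine the MMC's findings.
```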

Relevance:

60.00%

Publisher:

Abstract:

The current state of Russian databases on the properties of substances and materials is reviewed. A brief survey of methods for integrating such information systems is given, and an approach to integrating distributed databases based on a metabase is proposed. Implementation details of the proposed approach are presented for the integration of databases on electronic materials. An operating pilot version of the resulting integrated information system, implemented at IMET RAS, is described.
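
A minimal sketch of the metabase idea, assuming a central registry that holds only metadata about which autonomous source database covers which property, with a mediator routing queries accordingly; all database names and records here are hypothetical:

```python
# Sketch: a metabase stores metadata about autonomous source databases
# (which properties each covers), and a mediator routes integrated queries
# to the relevant sources. All names and records here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class SourceDB:
    name: str
    properties: set                       # property kinds this source covers
    records: dict = field(default_factory=dict)

    def query(self, substance, prop):
        return self.records.get((substance, prop))

# The metabase proper: registry entries, no property data of its own.
metabase = [
    SourceDB("thermo_db", {"melting_point"},
             {("GaAs", "melting_point"): "1511 K"}),
    SourceDB("electronic_db", {"band_gap"},
             {("GaAs", "band_gap"): "1.42 eV"}),
]

def integrated_query(substance, prop):
    """Route a query to every source whose metadata says it covers `prop`."""
    hits = {}
    for src in metabase:
        if prop in src.properties:
            value = src.query(substance, prop)
            if value is not None:
                hits[src.name] = value
    return hits

print(integrated_query("GaAs", "band_gap"))   # {'electronic_db': '1.42 eV'}
```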