Abstract:
This review aims to concisely chart the development of two individual research fields, namely nanomedicines, with specific emphasis on nanoparticles (NP) and microparticles (MP), and microneedle (MN) technologies, which have, in the recent past, been exploited in combinatorial approaches for the efficient delivery of a variety of medicinal agents across the skin. This is an emerging and exciting area of pharmaceutical sciences research within the remit of transdermal drug delivery and as such will undoubtedly continue to grow with the emergence of new formulation and fabrication methodologies for particles and MN. Firstly, the fundamental aspects of skin architecture and structure are outlined, with particular reference to their influence on NP and MP penetration. Following on from this, a variety of different particles are described, as are the diverse range of MN modalities currently under development. The review concludes by highlighting some of the novel delivery systems which have been described in the literature exploiting these two approaches and directs the reader towards emerging uses for nanomedicines in combination with MN.
Abstract:
The application of molecular mechanics and molecular dynamics simulations to the study of supramolecular systems has gained enormous relevance over recent years. Their use has not only led to a better understanding of the formation mechanisms of these systems, but has also provided a means for the development of new supramolecular architectures. This thesis describes the molecular mechanics and molecular dynamics work carried out on supramolecular associations between anions and synthetic receptors of the [2]catenane, [2]rotaxane and pseudorotaxane types. Supramolecular complexes involving heteroditopic calix[4]diquinone receptors and ion pairs formed by halide anions and alkali and ammonium cations are also studied. The studies presented here rest essentially on two strands: the study of the solution dynamic properties of the various supramolecular complexes considered, and the calculation of the relative Gibbs free energies of association of the various ions with the synthetic receptors. The methodologies employed comprised conventional molecular dynamics and REMD (Replica Exchange Molecular Dynamics) for the study of solution properties, and thermodynamic integration and MM-PBSA (Molecular Mechanics – Poisson-Boltzmann Surface Area) calculations for the computation of the relative association free energies. The results obtained, besides providing a more detailed view of the mechanisms involved in the recognition and binding of the various receptors to the anions and ion pairs addressed, are globally in agreement with their experimentally determined counterparts, thereby validating the methodologies employed. To conclude, the ability of one of the heteroditopic receptors studied to favourably assist the migration of the KCl ion pair across the water-chloroform interface was also investigated. To this end, SMD (Steered Molecular Dynamics) simulations were used to compute the Gibbs free energy profile associated with the migration of the ion pair across the interface.
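For reference, in the MM-PBSA scheme mentioned above, relative association free energies are typically obtained from a decomposition of the following standard form (a textbook formulation of the method, not quoted from the thesis itself):

$$\Delta G_{\text{bind}} = \langle G_{\text{complex}}\rangle - \langle G_{\text{receptor}}\rangle - \langle G_{\text{guest}}\rangle, \qquad G = E_{\text{MM}} + G_{\text{PB}} + G_{\text{SA}} - TS,$$

where $E_{\text{MM}}$ is the molecular-mechanics energy, $G_{\text{PB}}$ the polar solvation free energy from the Poisson-Boltzmann equation, $G_{\text{SA}}$ the nonpolar term proportional to the solvent-accessible surface area, and $TS$ an (often omitted) solute entropy contribution; the averages are taken over snapshots of the MD trajectory.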
Abstract:
The performance of real-time networks is under continuous improvement as a result of several trends in the digital world. However, these trends not only bring improvements but also exacerbate a series of undesirable aspects of real-time networks, such as communication latency, latency jitter, and packet drop rate. This Thesis focuses on the communication errors that appear in such real-time networks from the point of view of automatic control. Specifically, it investigates the effects of packet drops in automatic control over fieldbuses, as well as architectures and optimal techniques for their compensation. Firstly, a new approach to address the problems that arise from such packet drops is proposed. This novel approach is based on the simultaneous transmission of several values in a single message. Such messages can be from sensor to controller, in which case they comprise several past sensor readings, or from controller to actuator, in which case they comprise estimates of several future control values. A series of tests reveals the advantages of this approach. This approach is then extended to accommodate the techniques of contemporary optimal control. However, unlike the first approach, which deliberately withholds certain messages in order to make more efficient use of network resources, in this second case the techniques are used to reduce the effects of packet losses. After these two approaches based on data aggregation, optimal control over packet-dropping fieldbuses is studied using generalized actuator output functions. This study culminates in the development of a new optimal controller, together with the identification of the function, among the generalized functions that dictate the actuator's behaviour in the absence of a new control message, that leads to optimal performance. The Thesis also presents a different line of research, related to the output oscillations that take place as a consequence of the use of classic co-design techniques for networked control. The proposed algorithm aims to allow the execution of such classical co-design algorithms without causing an output oscillation that increases the value of the cost function; such increases may, under certain circumstances, negate the advantages of applying the classical co-design techniques. Yet another line of research investigated algorithms, more efficient than existing ones, for generating task execution sequences that guarantee that at least a given number of activated jobs is executed out of every set composed of a predetermined number of contiguous activations. This algorithm may, in the future, be applied to the generation of message transmission patterns in the above-mentioned techniques for the efficient use of network resources. The proposed task generation algorithm improves on its predecessors in that it can schedule systems that they cannot. The Thesis also presents a mechanism that allows multi-path routing to be performed in wireless sensor networks while ensuring that no value is counted in duplicate. This technique thereby improves the performance of wireless sensor networks, rendering them more suitable for control applications.
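As an illustration of the message-aggregation idea described above, the sketch below (hypothetical names and horizon, not the Thesis's actual implementation) shows an actuator that buffers the several future control values packed into each controller-to-actuator message and falls back on them when messages are dropped:

```python
from collections import deque
import random

HORIZON = 4  # number of future control values per message (assumed)

class Actuator:
    """Buffers predicted control values so dropped messages can be tolerated."""
    def __init__(self):
        self.buffer = deque()
        self.last = 0.0  # value held when every prediction has been consumed

    def on_message(self, future_controls):
        # A fresh message replaces any stale predictions.
        self.buffer = deque(future_controls)

    def on_sample_tick(self):
        # Consume the next predicted value; hold the last one during
        # long bursts of drops.
        if self.buffer:
            self.last = self.buffer.popleft()
        return self.last

act = Actuator()
for k in range(10):
    if random.random() > 0.3:  # simulate a 30% packet drop rate (illustrative)
        act.on_message([0.5 * k + 0.1 * i for i in range(HORIZON)])
    print(act.on_sample_tick())
```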
As mentioned before, this Thesis is centered on techniques for improving the performance of distributed control systems in which several elements are connected through a fieldbus that may be subject to packet drops. The first three approaches are directly related to this topic: the first two address the problem from an architectural standpoint, whereas the third does so on more theoretical grounds. The fourth ensures that methods found in the literature that pursue goals similar to those of this Thesis can do so without causing other problems that might invalidate the solutions in question. Finally, the Thesis presents an approach centered on the efficient generation of the transmission patterns used in the aforementioned techniques.
Abstract:
We consider some problems of the calculus of variations on time scales. At the beginning, our attention is devoted to two inverse extremal problems on arbitrary time scales. Firstly, using the Euler-Lagrange equation and the strengthened Legendre condition, we derive a general form for a variational functional that attains a local minimum at a given point of the vector space. Furthermore, we prove a necessary condition for a dynamic integro-differential equation to be an Euler-Lagrange equation. New and interesting results for the discrete and quantum calculus are obtained as particular cases. Afterwards, we prove Euler-Lagrange type equations and transversality conditions for generalized infinite horizon problems. Next, we investigate the composition of a certain scalar function with delta and nabla integrals of a vector-valued field. Euler-Lagrange equations in integral form, transversality conditions, and necessary optimality conditions for isoperimetric problems on an arbitrary time scale are proved. Finally, two applications of time scales in economics, with interesting results, are presented. In the former we consider a firm that wants to program its production and investment policies to reach a given production rate and to maximize its future market competitiveness. The model describing the firm's activities is studied in two different ways: using classical discretizations, and applying discrete versions of our results on time scales; we then compare the cost functional values obtained from these two approaches. The latter problem is more complex and relates the rate of inflation, p, and the rate of unemployment, u, which inflict a social loss. Using known relations between p, u, and the expected rate of inflation π, we rewrite the social loss function as a function of π. We present this model in the time scale framework and find an optimal path π that minimizes the total social loss over a given time interval.
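For context, the inverse problems above start from the delta Euler-Lagrange equation on a time scale $\mathbb{T}$; a standard statement, reconstructed here from the time-scales literature rather than quoted from the abstract, is that for the functional

$$\mathcal{L}[x] = \int_a^b L\big(t, x^{\sigma}(t), x^{\Delta}(t)\big)\,\Delta t,$$

a weak local minimizer $x$ satisfies

$$\frac{\Delta}{\Delta t}\Big[\partial_3 L\big(t, x^{\sigma}(t), x^{\Delta}(t)\big)\Big] = \partial_2 L\big(t, x^{\sigma}(t), x^{\Delta}(t)\big),$$

where $\partial_2 L$ and $\partial_3 L$ denote partial derivatives with respect to the second and third arguments. With $\mathbb{T} = \mathbb{R}$ this reduces to the classical Euler-Lagrange equation, and with $\mathbb{T} = \mathbb{Z}$ to its discrete counterpart.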
Abstract:
The production of color/flavor compounds in wine is the result of different interrelated reaction mechanisms. Among these, oxidation phenomena and the Maillard reaction stand out with particular relevance due to their large impact on the sensory quality of wines and, consequently, on the product's shelf-life. The aim of this thesis is to achieve a global vision of wine degradation mechanisms. The identification of mediator reactions involved in oxidative browning and aroma degradation will be attempted based on different detectors. Two approaches are implemented in this work: a “non-target” approach, in which relevant analytical tools are used to merge the information from cyclic voltammetry and diode-array (DAD) detectors, allowing a broader overview of the system and the flagging of compounds of interest; and a “target” approach, in which the identification and quantification of the different compounds related to the wine degradation process are performed using different detectors (HPLC-UV/Vis, LC-MS, GC-MS, and FID). Two different patterns of degradation are used in this study: wines subjected to O2 and temperature perturbations, and synthetic solutions with relevant wine constituents for mechanism validation. The results clearly demonstrate a “convolution” of chemical mechanisms: the presence of oxygen combined with temperature had a synergistic effect on the formation of several key odorant compounds. The results of this work could be translated to the wine-making and wine-storage environment through the modelling of the analysed compounds.
Abstract:
Dependence clusters are (maximal) collections of mutually dependent source code entities according to some dependence relation. Their presence in software complicates many maintenance activities, including testing, refactoring, and feature extraction. Despite several studies finding them common in production code, their formation, identification, and overall structure are not well understood, partly because of challenges in approximating true dependences between program entities. Previous research has considered two approximate dependence relations: a fine-grained statement-level relation using control and data dependences from a program’s System Dependence Graph, and a coarser relation based on function-level control-flow reachability. In principle, the first is more expensive and more precise than the second. Using a collection of twenty programs, we present an empirical investigation of the clusters identified by these two approaches. In support of the analysis, we consider a hybrid cluster type that works at the coarser function level but is based on the higher-precision statement-level dependences. The three types of clusters are compared based on their slice sets using two clustering metrics. We also perform extensive analysis of the programs to identify linchpin functions – functions primarily responsible for holding a cluster together. Results include evidence that the less expensive, coarser approaches can often be used as effective proxies for the more expensive, finer-grained approaches. Finally, the linchpin analysis shows that linchpin functions can be effectively and automatically identified.
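As a concrete, deliberately simplified illustration of the same-slice notion underlying these cluster types (an assumption for exposition; real tools compute slices from the System Dependence Graph rather than taking them as given), the sketch below groups entities whose backward slices are identical:

```python
from collections import defaultdict

def slice_clusters(entities, backward_slice):
    """Group entities by their slice sets; backward_slice maps an entity
    to the frozenset of entities contained in its backward slice."""
    groups = defaultdict(list)
    for e in entities:
        groups[backward_slice(e)].append(e)
    # Dependence clusters are usually taken to be the non-trivial groups.
    return [g for g in groups.values() if len(g) > 1]

# Toy example with slices given directly as frozensets.
slices = {
    "a": frozenset({"a", "b", "c"}),
    "b": frozenset({"a", "b", "c"}),
    "c": frozenset({"a", "b", "c"}),
    "d": frozenset({"d"}),
}
print(slice_clusters(slices, slices.__getitem__))  # [['a', 'b', 'c']]
```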
Abstract:
Doctoral thesis, Medicine (Neurosurgery), Universidade de Lisboa, Faculdade de Medicina, 2014
Abstract:
Doctoral thesis, Geography (Physical Geography), Universidade de Lisboa, Instituto de Geografia e Ordenamento do Território, 2014
Abstract:
This paper explores the experiences of e-learners participating in continuing professional development programmes in three UK universities. Data were collected using questionnaires, discussion group postings, and informal telephone interviews, and were analysed using two approaches to content analysis: a coding scheme and metaphors. Findings indicated that e-learners reconstruct their approaches to time management at an early stage in their programme; that they develop different time management strategies (planned, opportunistic, planned/opportunistic); and that metaphors illustrate their underlying experiences of time. These findings provide the basis for recommendations for e-tutors. Finally, the paper explores methodological issues and outlines some implications for practice.
Abstract:
This paper reviews some aspects of corporate strategy in a well-known smartphone provider. Two approaches to strategy are analysed: one concerning the industry and the other related to the organization. A general introduction to the smartphone industry is given, followed by specific background on BlackBerry. Two perspectives are explored: the first addresses the paradox of compliance and choice within the industry, and the second discusses the paradox of control and chaos at BlackBerry. The paper concludes with a brief overview of the company's performance from 2006 to 2012, leading to some recommendations.
Abstract:
A key feature of a context-aware application is its ability to adapt based on changes of context. Two approaches widely used in this regard are context-action pair mapping, where developers match an action to execute for a particular context change, and adaptive learning, where a context-aware application refines its action over time based on the preceding action's outcome. Both approaches have limitations that make them unsuitable in situations where a context-aware application has to deal with unknown context changes. In this paper, we propose a framework where adaptation is carried out via concurrent multi-action evaluation of a dynamically created action space. The dynamic creation of the action space eliminates the need to rely on developers to create context-action pairs, and the concurrent multi-action evaluation reduces the adaptation time compared with the iterative approach used by adaptive learning techniques. Using our reference implementation of the framework, we show how it could be used to dynamically determine the threshold price in an e-commerce system that uses the name-your-own-price (NYOP) strategy.
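A minimal sketch of concurrent multi-action evaluation follows; the names, scoring logic, and NYOP example are illustrative assumptions, not the paper's reference implementation. Candidate actions are generated dynamically from the observed context, evaluated in parallel, and the best-scoring action is selected:

```python
from concurrent.futures import ThreadPoolExecutor

def generate_action_space(context):
    # Dynamically derive candidate actions: here, candidate threshold
    # prices around a context-dependent base price (assumed NYOP example).
    base = context["base_price"]
    return [base * f for f in (0.8, 0.9, 1.0, 1.1, 1.2)]

def evaluate(action, context):
    # Illustrative utility: revenue if the buyer's bid clears the threshold.
    return action if context["bid"] >= action else 0.0

def adapt(context):
    actions = generate_action_space(context)
    with ThreadPoolExecutor() as pool:  # evaluate all candidates concurrently
        scores = list(pool.map(lambda a: evaluate(a, context), actions))
    return max(zip(scores, actions))[1]  # action with the highest score

print(adapt({"base_price": 100.0, "bid": 95.0}))  # -> 90.0
```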
Abstract:
Existing Workflow Management Systems (WFMSs) follow a pragmatic approach. They often use a proprietary modelling language with an intuitive graphical layout, but the underlying semantics lack a formal foundation. As a consequence, analysis issues, such as proving correctness (i.e. soundness and completeness) and reliable execution, are not supported at the design level. This project will use an applied ontology approach, formally defining key terms such as process, sub-process, and action/task on the basis of formal temporal theory. Current business process modelling (BPM) standards such as Business Process Modelling Notation (BPMN) and the Unified Modelling Language (UML) Activity Diagram (AD) model their constructs with no logical basis. This investigation will contribute to research and industry by providing a framework that grounds BPM, enabling a correct business process (BP) to be reasoned about and represented. This is missing in the current BPM domain, and may reduce design costs and avert the burden of the redundant terms used by current standards. A graphical tool will be introduced which implements the formal ontology defined in the framework. This new tool can be used as a modelling tool and, at the same time, serves to validate the model. This research will also fill an existing gap by providing a unified graphical representation that represents a BP in a logically consistent manner for the mainstream modelling standards in the fields of business and IT. A case study will be conducted to analyse a catalogue of existing ‘patient pathways’, i.e. processes, of King’s College Hospital NHS Trust, including current performance statistics. Following the application of the framework, a mapping will be conducted and new performance statistics will be collected. A cost/benefit analysis report will be produced comparing the results of the two approaches.
Abstract:
With the electricity market liberalization, distribution and retail companies are looking for better market strategies based on adequate information about the consumption patterns of their electricity consumers. A fair insight into consumers’ behavior will permit the definition of specific contract aspects based on the different consumption patterns. In order to form the different consumer classes and find a set of representative consumption patterns, we use electricity consumption data from a utility client’s database and two approaches: the Two-Step clustering algorithm and the WEACS approach, based on evidence accumulation (EAC), for combining partitions in a clustering ensemble. While EAC uses a voting mechanism to produce a co-association matrix based on the pairwise associations obtained from N partitions, where each partition has equal weight in the combination process, the WEACS approach uses subsampling and weights the partitions differently. As a complementary step to the WEACS approach, we combine the partitions obtained by WEACS with the ALL clustering ensemble construction method and use the Ward Link algorithm to obtain the final data partition. The characterization of the obtained consumer clusters was performed using the C5.0 classification algorithm. Experimental results showed that the WEACS approach leads to better results than many other clustering approaches.
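A minimal sketch of the plain EAC step described above follows, with equal partition weights (WEACS's subsampling and weighting are omitted) and the Ward Link step applied to the resulting co-association matrix; the function names and library calls are assumptions, not the authors' code:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def coassociation(partitions, n_samples):
    """partitions: list of label arrays, one per clustering run."""
    C = np.zeros((n_samples, n_samples))
    for labels in partitions:
        C += (labels[:, None] == labels[None, :])  # pairwise same-cluster votes
    return C / len(partitions)

def eac_ward(partitions, n_samples, n_clusters):
    C = coassociation(partitions, n_samples)
    D = 1.0 - C                     # co-association -> dissimilarity
    np.fill_diagonal(D, 0.0)
    Z = linkage(squareform(D), method="ward")
    return fcluster(Z, t=n_clusters, criterion="maxclust")

# Toy usage: combine two partitions of four consumers into two final classes.
parts = [np.array([0, 0, 1, 1]), np.array([0, 0, 0, 1])]
print(eac_ward(parts, n_samples=4, n_clusters=2))
```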
Abstract:
The concept of demand response has growing importance in the context of future power systems. Demand response can be seen as a resource like distributed generation, storage, electric vehicles, etc. All these resources require an infrastructure able to give players the means to operate and use them in an efficient way. This infrastructure implements in practice the smart grid concept, and should accommodate a large number of diverse types of players in the context of a competitive business environment. In this paper, demand response is optimally scheduled jointly with other resources, such as distributed generation units and the energy provided by the electricity market, minimizing the operation costs from the point of view of a virtual power player who manages these resources and supplies the aggregated consumers. The optimal schedule is obtained using two approaches based on particle swarm optimization (with and without mutation), which are compared with a deterministic approach used as a reference methodology. A case study with two scenarios implemented in DemSi, a demand response simulator developed by the authors, evidences the advantages of the proposed particle swarm approaches.
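For illustration, a compact particle swarm optimization sketch with an optional mutation step follows; it is a generic stand-in for the two PSO variants compared above, and the cost function, bounds, and parameters are placeholder assumptions, not DemSi's actual resource-scheduling model:

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, mutate=True, rng=None):
    rng = rng or np.random.default_rng(0)
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration weights
    x = rng.uniform(0, 1, (n_particles, dim))      # positions, e.g. DR schedules in [0, 1]
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    for _ in range(iters):
        g = pbest[pcost.argmin()]                  # global best position
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, 0, 1)
        if mutate:                                 # random perturbation of a few components
            mask = rng.random(x.shape) < 0.02
            x = np.where(mask, rng.uniform(0, 1, x.shape), x)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
    return pbest[pcost.argmin()], pcost.min()

# Example: minimize a toy quadratic cost over a 24-period "schedule".
best_x, best_cost = pso(lambda z: float(np.sum((z - 0.3) ** 2)), dim=24)
print(best_cost)
```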