879 results for Conflict-based method
Abstract:
Objective In this study, we used a chemometrics-based method to correlate key liposomal adjuvant attributes with in-vivo immune responses through multivariate analysis. Methods The liposomal adjuvant composed of the cationic lipid dimethyldioctadecylammonium bromide (DDA) and trehalose 6,6-dibehenate (TDB) was modified with 1,2-distearoyl-sn-glycero-3-phosphocholine at a range of mol% ratios, and the main liposomal characteristics (liposome size and zeta potential) were measured along with their immunological performance as an adjuvant for the novel, postexposure fusion tuberculosis vaccine, Ag85B-ESAT-6-Rv2660c (H56 vaccine). Partial least squares regression analysis was applied to correlate and cluster liposomal adjuvant particle characteristics with in-vivo derived immunological performance (IgG, IgG1, IgG2b, spleen proliferation, IL-2, IL-5, IL-6, IL-10, IFN-γ). Key findings While a range of factors varied across the formulations, decreasing 1,2-distearoyl-sn-glycero-3-phosphocholine content (and the resulting change in zeta potential) together constituted the strongest variables in the model. Enhanced DDA and TDB content (and the resulting zeta potential) stimulated a response skewed towards cell-mediated immunity, with the model identifying correlations with IFN-γ, IL-2 and IL-6. Conclusion This study demonstrates the application of chemometrics-based correlation and clustering, which can inform liposomal adjuvant design.
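Since this abstract hinges on partial least squares regression, a minimal sketch may help. The following one-component PLS1 (NIPALS-style) fit in plain NumPy uses invented data in which a hypothetical zeta-potential column drives a single immune readout; nothing here reproduces the study's actual dataset or software.

```python
import numpy as np

def pls1_one_component(X, y):
    """One-component PLS1 (NIPALS-style) on mean-centered data."""
    w = X.T @ y                      # direction of maximal covariance with y
    w /= np.linalg.norm(w)
    t = X @ w                        # latent scores
    q = (y @ t) / (t @ t)            # regress y on the scores
    return w, t, q

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))        # hypothetical: size, zeta, DSPC mol%
y = 2.0 * X[:, 1] + rng.normal(scale=0.05, size=100)  # readout tied to zeta
X = X - X.mean(axis=0)               # mean-center, standard before PLS
y = y - y.mean()

w, t, q = pls1_one_component(X, y)
y_hat = t * q
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum(y ** 2)
```

On this toy data the weight vector is dominated by the zeta-potential column, which is the kind of variable-importance reading the study draws from its model.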
Abstract:
Signal processing is an important topic in technological research today. Within nonlinear dynamics research, the endeavor to control or order chaos is an issue that has received increasing attention over the last few years. Growing interest in neural networks composed of simple processing elements (neurons) has led to the widespread use of such networks for learning to control dynamic systems. This paper presents a backpropagation-based neural network architecture that can be used as a controller to stabilize unstable periodic orbits. It also presents a neural network-based method for transferring the dynamics among attractors, leading to more efficient system control. The procedure can be applied to every point of the basin, no matter how far it lies from the attractor. Finally, this paper shows how two mixed chaotic signals can be controlled using a backpropagation neural network as a filter to separate and control both signals at the same time. The neural network provides more effective control, overcoming the problems that arise with feedback control methods. Control is more effective because it can be applied to the system at any point, even one moving away from the target state, which avoids waiting times. Control can also be applied even when little information about the system is available, and it remains stable longer, even in the presence of random dynamic noise.
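As a rough illustration of the backpropagation ingredient only (not the authors' architecture or controller), the sketch below trains a small two-layer network by hand-coded backpropagation to learn the chaotic logistic map x_{n+1} = 4x_n(1 − x_n); a learned model of this kind is the usual starting point for computing stabilizing perturbations.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.05, 0.95, 200)
y = 4.0 * x * (1.0 - x)                  # chaotic logistic map targets

# One hidden tanh layer, hand-coded backpropagation (gradient descent on MSE).
H = 32
W1 = rng.normal(scale=2.0, size=(1, H)); b1 = rng.uniform(-2, 2, H)
W2 = rng.normal(scale=0.1, size=(H, 1)); b2 = np.zeros(1)
X, Y = x[:, None], y[:, None]
lr = 0.1
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)             # forward pass
    out = h @ W2 + b2
    err = out - Y
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)   # backward pass
    dh = (err @ W2.T) * (1.0 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
mse = float(np.mean(err ** 2))
```

All sizes, rates, and iteration counts are arbitrary choices for the sketch; the point is only that the fitted map's error drops well below the variance of the chaotic signal itself.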
Abstract:
Background - The intimate relationship between dogs and their owners has the potential to increase the risk of human exposure to bacterial pathogens. Over the past 40 years, there have been several reports of transmission of salmonellae from dogs to humans. This study therefore aimed to determine the prevalence of Salmonella in the faeces of dogs from the Midlands region of the United Kingdom to assess exposure risk and potential for zoonotic transmission. Results - A total of 436 apparently healthy dogs without diarrhoea from households (n = 126), rescue centres (n = 96), boarding kennels (n = 43), retired greyhound kennels (n = 39) and a pet nutrition facility (n = 132) were investigated for Salmonella shedding. Faecal samples were processed by an enrichment-culture-based method. The faeces from one dog (0.23 %; 95 % confidence interval 0.006 %, 1.27 %) were positive for Salmonella; the isolate was S. enterica subspecies arizonae. Conclusion - This study showed that the prevalence of Salmonella in faeces from apparently healthy dogs across a variety of housing conditions is low; however, Salmonella shedding was still identified.
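The quoted confidence limits can be checked with the exact (Clopper-Pearson) binomial interval for 1 positive out of 436; the bisection below is illustrative arithmetic, not the authors' software.

```python
# Exact binomial (Clopper-Pearson) 95% CI for x = 1 positive of n = 436.
n = 436
lower = 1.0 - 0.975 ** (1.0 / n)          # solves 1 - (1-p)^n = 0.025

def upper_tail(p):
    # P(X <= 1) for Binomial(n, p)
    return (1.0 - p) ** n + n * p * (1.0 - p) ** (n - 1)

lo, hi = 0.0, 1.0                          # upper limit solves P(X<=1) = 0.025
for _ in range(60):
    mid = (lo + hi) / 2.0
    if upper_tail(mid) > 0.025:
        lo = mid
    else:
        hi = mid
upper = (lo + hi) / 2.0
# lower*100 ~ 0.006 %, upper*100 ~ 1.27 %, matching the quoted interval
```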
Abstract:
More than a hundred years have passed since Henry Gantt (Gantt, 1910) introduced his bar chart, and more than sixty since Kelley (Kelley, 1961) and Walker (Walker, 1959) published the critical path method. Are the cost- and resource-planning methods built on these foundations adequate for today's challenges? This paper presents the fruit of several years of research. One of its principal aims was to examine how well existing project planning tools meet the demands of today's projects, and where and in what areas these methods need further development, or perhaps replacement. The author presents methods that lead far beyond the primarily operative tasks to which project planning has so far been confined, turning our attention to questions such as: which activities and projects should be implemented; which should be dropped or scheduled into a later project; and how should the implementation and importance of projects be ranked and prioritized?
In this research, new matrix-based project planning methods are specified that can address not only operative but also strategic questions: which subprojects/tasks should be completed; how to treat completion priorities when defining the logic plan; and how to support not only traditional but also agile project management approaches. The paper introduces a new matrix-based method that can be used for ranking project or multi-project scenarios with different kinds of target functions. The author also presents the methods used in an expert module and shows how to integrate this module into a traditional PMS system.
Abstract:
The nation's freeway systems are becoming increasingly congested. A major contributor to traffic congestion on freeways is traffic incidents. Traffic incidents are non-recurring events, such as accidents or stranded vehicles, that cause a temporary roadway capacity reduction, and they can account for as much as 60 percent of all traffic congestion on freeways. One major freeway incident management strategy involves diverting traffic to avoid incident locations by relaying timely information through Intelligent Transportation Systems (ITS) devices such as dynamic message signs or real-time traveler information systems. The decision to divert traffic depends foremost on the expected duration of an incident, which is difficult to predict. In addition, the duration of an incident is affected by many contributing factors. Determining and understanding these factors can help the process of identifying and developing better strategies to reduce incident durations and alleviate traffic congestion. A number of research studies have attempted to develop models to predict incident durations, yet with limited success. This dissertation attempts to improve on these previous efforts by applying data mining techniques to a comprehensive incident database maintained by the District 4 ITS Office of the Florida Department of Transportation (FDOT). Two categories of incident duration prediction models were developed: "offline" models designed for use in the performance evaluation of incident management programs, and "online" models for real-time prediction of incident duration to aid decision making about traffic diversion in the event of an ongoing incident. Multiple data mining techniques were applied and evaluated in the research. Multiple linear regression analysis and a decision tree-based method were applied to develop the offline models, and a rule-based method and a tree algorithm called M5P were used to develop the online models.
The results show that the models can in general achieve high prediction accuracy within acceptable time intervals of the actual durations. The research also identifies some new contributing factors that have not been examined in past studies. As part of the research effort, software code was developed to implement the models in the existing software system of District 4 FDOT for actual applications.
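To illustrate the decision tree idea behind the offline models, the sketch below finds the single best split of one hypothetical predictor (lanes blocked) that minimizes squared error in predicted duration; the data and variable names are invented, not drawn from the FDOT database.

```python
import numpy as np

def best_split(x, y):
    """Best single threshold on x minimizing total squared error of y."""
    best_thr, best_left, best_right, best_sse = None, None, None, np.inf
    for thr in np.unique(x)[:-1]:
        left, right = y[x <= thr], y[x > thr]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_thr, best_left, best_right = thr, left.mean(), right.mean()
            best_sse = sse
    return best_thr, best_left, best_right

lanes = np.array([0, 0, 1, 1, 2, 2, 3, 3])                 # lanes blocked
duration = np.array([12, 15, 30, 35, 55, 60, 90, 95.0])    # minutes
thr, short_mean, long_mean = best_split(lanes, duration)
# splits at <= 1 lane: predicts 23 min vs 75 min
```

A full regression tree (or the M5P model trees used online) applies this split search recursively, but the one-level case already shows how a tree turns a contributing factor into a duration rule.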
Abstract:
This dissertation attempts to unravel why and how postcolonial Trinidad has displayed relative stability in spite of the presence of factors that have produced conflict and instability in other postcolonial societies. Trinidad's distinctive social formation began in the colonial period with a unique politics of culture among the landowning European groups, Anglican English and French Creole. Contrary to the materialist assumption of landowners' class solidarity, the development of Trinidad's plantation economy into two crops, each controlled by a separate European ethno-religious faction, impeded the integration and subsequent ideological domination of European Christians. Throughout the nineteenth century neither group dominated the other, nor did they fuse into a single ruling class. The dynamics between them generated recurring conflict while simultaneously creating mechanisms that limited conflict. Based on original in-depth fieldwork and historical analysis, the dissertation proceeds to demonstrate that Trinidad's unique intra-class conflict within the dominant European population has produced hyphenated, as opposed to hybridized, cultural elements. Supplementing the historical analysis with empirical examinations of contemporary inter-religious rituals and postcolonial politics, this dissertation argues that social integration is inseparable from the question of inter-cultural mixture or articulation. In Trinidad, however, the resulting combination of distinct cultural elements is neither a "plural society" (M.G. Smith 1965; Despres 1967) nor an integrated totality in the structural-functionalist sense (R.T. Smith 1962; Braithwaite 1967). Moreover, Trinidad does not conform to the post-structural framework's depiction of the social linkage between power and culture. The concept of cultural hybridization is equally misleading in the case of Trinidad.
The underlying assumption of a monolithic European population's cultural hegemony, and post-structural analysis's almost exclusive focus on the inter-class politics of culture, seriously misrepresent and misunderstand Trinidadian culture and its associated social and political relations. The dissertation examines this reflexive influence of culture not as an instrument of the powerful few but as an autonomous force that reproduces social divisions, yet restrains conflict.
Abstract:
The accurate and reliable estimation of travel time based on point detector data is needed to support Intelligent Transportation System (ITS) applications. It has been found that the quality of travel time estimation is a function of the method used in the estimation and varies for different traffic conditions. In this study, two hybrid on-line travel time estimation models, and their corresponding off-line methods, were developed to achieve better estimation performance under various traffic conditions, including recurrent congestion and incidents. The first model combines the Mid-Point method, which is a speed-based method, with a traffic flow-based method. The second model integrates two speed-based methods: the Mid-Point method and the Minimum Speed method. In both models, the switch between travel time estimation methods is based on the congestion level and queue status automatically identified by clustering analysis. During incident conditions with rapidly changing queue lengths, shock wave analysis-based refinements are applied for on-line estimation to capture the fast queue propagation and recovery. Travel time estimates obtained from existing speed-based methods, traffic flow-based methods, and the models developed were tested using both simulation and real-world data. The results indicate that all tested methods performed at an acceptable level during periods of low congestion. However, their performances vary with an increase in congestion. Comparisons with other estimation methods also show that the developed hybrid models perform well in all cases. Further comparisons between the on-line and off-line travel time estimation methods reveal that off-line methods perform significantly better only during fast-changing congested conditions, such as during incidents. 
The impacts of major influential factors on the performance of travel time estimation, including data preprocessing procedures, detector errors, detector spacing, frequency of travel time updates to traveler information devices, travel time link length, and posted travel time range, were investigated in this study. The results show that these factors have more significant impacts on the estimation accuracy and reliability under congested conditions than during uncongested conditions. For the incident conditions, the estimation quality improves with the use of a short rolling period for data smoothing, more accurate detector data, and frequent travel time updates.
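A minimal sketch of the speed-based Mid-Point method named above, under the usual assumption that each detector's spot speed holds over half the distance to its neighbors; the positions and speeds are illustrative, not from the study's test data.

```python
def midpoint_travel_time(positions_mi, speeds_mph):
    """Sum segment times, each half covered at the nearer detector's speed."""
    total = 0.0
    for i in range(len(positions_mi) - 1):
        seg = positions_mi[i + 1] - positions_mi[i]
        total += (seg / 2.0) / speeds_mph[i] + (seg / 2.0) / speeds_mph[i + 1]
    return total  # hours

# Detectors at mileposts 0, 1, 2; the middle one reads a congested 30 mph.
minutes = 60.0 * midpoint_travel_time([0.0, 1.0, 2.0], [60.0, 30.0, 60.0])
# ~3 minutes for the 2-mile link
```

The hybrid models described above would switch away from this estimator (to a flow-based or minimum-speed method) when clustering flags congested or queued conditions.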
Abstract:
The goal of power monitoring in electrical power systems is to promote the reliability as well as the quality of electrical power. Therefore, this dissertation proposes a new wavelet transform-based theory of power for real-time estimation of RMS voltages and currents, and of power quantities such as active power, reactive power, apparent power, and power factor. Appropriate estimation of RMS and power values is important for many applications, such as the design and analysis of power systems, compensation devices for improving power quality, and energy metering instruments. Simulation and experimental results obtained through the proposed Maximal Overlap Discrete Wavelet Transform-based method were compared with the IEEE Standard 1459-2010 and a commercial oscilloscope, respectively, presenting equivalent results. The proposed method showed good performance for compactly supported mother wavelets, which is in accordance with real-time applications.
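The energy-preservation idea behind a MODWT-based RMS estimate can be sketched with a one-level Haar MODWT, whose wavelet and scaling coefficients together conserve signal energy exactly; this shows only the principle, not the dissertation's full filter bank or power definitions.

```python
import numpy as np

def haar_modwt_rms(x):
    """RMS via level-1 Haar MODWT coefficients (energy is preserved exactly)."""
    x = np.asarray(x, dtype=float)
    xm = np.roll(x, 1)                   # circular (periodic) boundary
    W = (x - xm) / 2.0                   # wavelet (detail) coefficients
    V = (x + xm) / 2.0                   # scaling (smooth) coefficients
    return np.sqrt(np.mean(W ** 2 + V ** 2))

t = np.arange(600) / 600.0
v = 127.0 * np.sqrt(2.0) * np.sin(2 * np.pi * 60 * t)   # 127 V RMS, 60 Hz
rms = haar_modwt_rms(v)                  # recovers 127 V from the coefficients
```

Because W² + V² = (x_t² + x_{t−1}²)/2 termwise, summing circularly reproduces the signal energy, which is why RMS (and, with both voltage and current, power quantities) can be computed directly in the wavelet domain.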
Abstract:
The great interest in nonlinear system identification is mainly due to the fact that a large number of real systems are complex and need their nonlinearities considered so that their models can be successfully used in applications of control, prediction, inference, among others. This work evaluates the application of Fuzzy Wavelet Neural Networks (FWNN) to identify nonlinear dynamical systems subjected to noise and outliers. Generally, these elements cause negative effects on the identification procedure, resulting in erroneous interpretations of the dynamical behavior of the system. The FWNN combines in a single structure the ability of fuzzy logic to deal with uncertainties, the multiresolution characteristics of wavelet theory, and the learning and generalization abilities of artificial neural networks. Usually, the learning procedure of these neural networks is realized by a gradient-based method that uses the mean squared error as its cost function. This work proposes replacing this traditional function with an Information Theoretic Learning similarity measure called correntropy. With this similarity measure, higher-order statistics can be considered during the FWNN training process. For this reason, the measure is more suitable for non-Gaussian error distributions and makes training less sensitive to the presence of outliers. To evaluate this replacement, FWNN models are obtained in two identification case studies: a real nonlinear system, consisting of a multisection tank, and a simulated system based on a model of the human knee joint. The results demonstrate that using correntropy as the cost function of the error backpropagation algorithm makes the identification procedure using FWNN models more robust to outliers. However, this is only achieved if the Gaussian kernel width of the correntropy is properly adjusted.
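A hedged sketch of the correntropy measure itself: a Gaussian kernel over the errors, so a gross outlier barely moves it, while the mean squared error explodes. The kernel width sigma is the sensitivity knob the abstract says must be tuned, and (unlike MSE) correntropy is maximized during training. The residual values are invented for illustration.

```python
import numpy as np

def correntropy(e, sigma=1.0):
    """Gaussian-kernel similarity over errors; maximized during training."""
    return np.mean(np.exp(-e ** 2 / (2.0 * sigma ** 2)))

errors = np.array([0.1, -0.2, 0.05, 0.0])         # well-fit residuals
errors_outlier = np.append(errors, 50.0)          # plus one gross outlier
mse_ratio = np.mean(errors_outlier ** 2) / np.mean(errors ** 2)
corr_clean = correntropy(errors)
corr_out = correntropy(errors_outlier)
# MSE blows up by orders of magnitude; correntropy barely moves.
```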
Abstract:
Brain-computer interfaces (BCI) have the potential to restore communication or control abilities in individuals with severe neuromuscular limitations, such as those with amyotrophic lateral sclerosis (ALS). The role of a BCI is to extract and decode relevant information that conveys a user's intent directly from brain electro-physiological signals and translate this information into executable commands to control external devices. However, the BCI decision-making process is error-prone due to noisy electro-physiological data, representing the classic problem of efficiently transmitting and receiving information via a noisy communication channel.
This research focuses on P300-based BCIs which rely predominantly on event-related potentials (ERP) that are elicited as a function of a user's uncertainty regarding stimulus events, in either an acoustic or a visual oddball recognition task. The P300-based BCI system enables users to communicate messages from a set of choices by selecting a target character or icon that conveys a desired intent or action. P300-based BCIs have been widely researched as a communication alternative, especially in individuals with ALS who represent a target BCI user population. For the P300-based BCI, repeated data measurements are required to enhance the low signal-to-noise ratio of the elicited ERPs embedded in electroencephalography (EEG) data, in order to improve the accuracy of the target character estimation process. As a result, BCIs have relatively slower speeds when compared to other commercial assistive communication devices, and this limits BCI adoption by their target user population. The goal of this research is to develop algorithms that take into account the physical limitations of the target BCI population to improve the efficiency of ERP-based spellers for real-world communication.
In this work, it is hypothesised that building adaptive capabilities into the BCI framework can potentially give the BCI system the flexibility to improve performance by adjusting system parameters in response to changing user inputs. The research in this work addresses three potential areas for improvement within the P300 speller framework: information optimisation, target character estimation and error correction. The visual interface and its operation control the method by which the ERPs are elicited through the presentation of stimulus events. The parameters of the stimulus presentation paradigm can be modified to modulate and enhance the elicited ERPs. A new stimulus presentation paradigm is developed in order to maximise the information content that is presented to the user by tuning stimulus paradigm parameters to positively affect performance. Internally, the BCI system determines the amount of data to collect and the method by which these data are processed to estimate the user's target character. Algorithms that exploit language information are developed to enhance the target character estimation process and to correct erroneous BCI selections. In addition, a new model-based method to predict BCI performance is developed, an approach which is independent of stimulus presentation paradigm and accounts for dynamic data collection. The studies presented in this work provide evidence that the proposed methods for incorporating adaptive strategies in the three areas have the potential to significantly improve BCI communication rates, and the proposed method for predicting BCI performance provides a reliable means to pre-assess BCI performance without extensive online testing.
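As a purely hypothetical sketch of dynamic data collection in target-character estimation, the loop below updates a posterior over four candidate characters from simulated classifier scores and stops stimulating once a confidence threshold is reached; the score model, thresholds, and alphabet are invented, not those of this work.

```python
import numpy as np

rng = np.random.default_rng(2)
chars = list("ABCD")
target = 2                                # true target "C" (simulated user)
post = np.full(4, 0.25)                   # uniform prior over characters
flashes = 0
while post.max() < 0.95 and flashes < 100:
    for i in range(4):                    # flash each character once per round
        score = rng.normal(1.0 if i == target else 0.0, 1.0)
        like = np.ones(4)
        like[i] = np.exp(score - 0.5)     # likelihood ratio N(1,1) vs N(0,1)
        post *= like
        post /= post.sum()                # Bayes update after every flash
    flashes += 1
choice = chars[int(np.argmax(post))]      # stop early once confident
```

Stopping on posterior confidence rather than after a fixed number of repetitions is the essence of dynamic data collection: easy selections finish quickly, and data collection continues only while the evidence is ambiguous.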
Abstract:
This paper introduces a screw theory-based method, termed the constraint and position identification (CPI) approach, to synthesize decoupled spatial translational compliant parallel manipulators (XYZ CPMs) with consideration of actuation isolation. The proposed approach is based on a systematic arrangement of rigid stages and compliant modules in a three-legged XYZ CPM system using the constraint spaces and the position spaces of the compliant modules. The constraint spaces and the position spaces are first derived based on screw theory rather than on rigid-body mechanism design experience. Additionally, the constraint spaces are classified into different constraint combinations, with typical position spaces depicted via geometric entities. Furthermore, the systematic synthesis process based on the constraint combinations and the geometric entities is demonstrated via several examples. Finally, several novel decoupled XYZ CPMs with monolithic configurations are created and verified by finite element analysis. The present CPI approach enables experts and beginners alike to synthesize a variety of decoupled XYZ CPMs with consideration of actuation isolation by selecting an appropriate constraint and an optimal position for each of the compliant modules according to a specific application.
Abstract:
We present fast functional photoacoustic microscopy (PAM) for three-dimensional high-resolution, high-speed imaging of the mouse brain, complementary to other imaging modalities. We implemented a single-wavelength pulse-width-based method with a one-dimensional imaging rate of 100 kHz to image blood oxygenation with capillary-level resolution. We applied PAM to image the vascular morphology, blood oxygenation, blood flow and oxygen metabolism in both resting and stimulated states in the mouse brain.
Abstract:
This thesis deals with the evaporation of non-ideal liquid mixtures using a multicomponent mass transfer approach. It develops the concept of evaporation maps as a convenient way of representing the dynamic composition changes of ternary mixtures during an evaporation process. Evaporation maps represent the residual composition of evaporating ternary non-ideal mixtures over the full range of composition, and are analogous to the commonly-used residue curve maps of simple distillation processes. The evaporation process initially considered in this work involves gas-phase limited evaporation from a liquid or wetted-solid surface, over which a gas flows at known conditions. Evaporation may occur into a pure inert gas, or into one pre-loaded with a known fraction of one of the ternary components. To explore multicomponent mass-transfer effects, a model is developed that uses an exact solution to the Maxwell-Stefan equations for mass transfer in the gas film, with a lumped approach applied to the liquid phase. Solutions to the evaporation model take the form of trajectories in temperature-composition space, which are then projected onto a ternary diagram to form the map. Novel algorithms are developed for computation of pseudo-azeotropes in the evaporating mixture, and for calculation of the multicomponent wet-bulb temperature at a given liquid composition. A numerical continuation method is used to track the bifurcations which occur in the evaporation maps, where the composition of one component of the pre-loaded gas is the bifurcation parameter. The bifurcation diagrams can in principle be used to determine the required gas composition to produce a specific terminal composition in the liquid. A simple homotopy method is developed to track the locations of the various possible pseudo-azeotropes in the mixture. The stability of pseudo-azeotropes in the gas-phase limited case is examined using a linearized analysis of the governing equations.
Algorithms for the calculation of separation boundaries in the evaporation maps are developed using an optimization-based method, as well as a method employing eigenvectors derived from the linearized analysis. The flexure of the wet-bulb temperature surface is explored, and it is shown how evaporation trajectories cross ridges and valleys, so that ridges and valleys of the surface do not coincide with separation boundaries. Finally, the assumption of gas-phase limited mass transfer is relaxed, by employing a model that includes diffusion in the liquid phase. A finite-volume method is used to solve the system of partial differential equations that results. The evaporation trajectories for the distributed model reduce to those of the lumped (gas-phase limited) model as the diffusivity in the liquid increases; under the same gas-phase conditions the permissible terminal compositions of the distributed and lumped models are the same.
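A much-simplified analogue of such a trajectory (assuming constant relative volatility and none of the thesis's Maxwell-Stefan film or wet-bulb detail): classical residue-curve dynamics dx/dξ = x − y(x), integrated by forward Euler, which terminates at the least volatile pure component. The volatilities and starting composition are invented.

```python
import numpy as np

alpha = np.array([3.0, 2.0, 1.0])          # relative volatilities (invented)
x = np.array([0.3, 0.3, 0.4])              # initial liquid mole fractions
for _ in range(4000):
    y = alpha * x / np.dot(alpha, x)       # equilibrium vapor composition
    x = x + 0.01 * (x - y)                 # forward Euler on dx/dxi = x - y
    x = np.clip(x, 0.0, None)
    x /= x.sum()                           # stay on the composition simplex
# the residue converges to the least volatile component: x -> [0, 0, 1]
```

Plotting many such trajectories from different starting compositions on a ternary diagram gives the map structure that the evaporation maps generalize to non-ideal, gas-phase-limited evaporation.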
Abstract:
This work focuses on the role played by visual control of space in the social practices of prehistoric communities. It is structured around a case study, the municipality of Calviá in the southwest of the island of Mallorca, in order to analyze the different forms of architectural monumentality and how they constitute a social reference point within the landscape. Starting from a broad time span covering the Naviform Bronze Age (1550-850 BC), the Talayotic period (850-550 BC) and the Post-Talayotic period (550-123 BC), it analyzes changes and continuities in the construction of the landscape through strategies of visibility, perception and movement around the architectural monuments. From the perspective of Landscape Archaeology, and through the use of Geographic Information Systems (GIS), a long-term trend analysis of the social configuration of a landscape is proposed.
Abstract:
In this study, the authors propose simple methods to evaluate the achievable rates and outage probability of a cognitive radio (CR) link that take into account the imperfection of spectrum sensing. In the considered system, the CR transmitter and receiver correlatively sense and dynamically exploit the spectrum pool via dynamic frequency hopping. Under imperfect spectrum sensing, false alarms and missed detections occur, causing impulsive interference that emerges from collisions due to the simultaneous spectrum access of primary and cognitive users. This makes it very challenging to evaluate the achievable rates. By first examining the static link, where the channel is assumed to be constant over time, they show that the achievable rate using a Gaussian input can be calculated accurately through a simple series representation. In the second part of this study, they extend the calculation of the achievable rate to wireless fading environments. To take the effect of fading into account, they introduce a piece-wise linear curve fitting-based method to approximate the instantaneous achievable rate curve as a combination of linear segments. It is then demonstrated that the ergodic achievable rate in fast fading and the outage probability in slow fading can be calculated to any given accuracy level.
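The piece-wise linear idea can be sketched directly: replace the rate curve log2(1 + SNR) with chords between breakpoints and measure the worst-case gap. The breakpoints below are arbitrary, not those chosen in the study, and the approximation is shown for its own sake, not the paper's fading averages.

```python
import numpy as np

def pwl_rate(snr, knots):
    """Chord (piece-wise linear) approximation of log2(1 + snr)."""
    return np.interp(snr, knots, np.log2(1.0 + knots))

knots = np.array([0.0, 1.0, 3.0, 7.0, 15.0, 31.0])   # arbitrary breakpoints
snr = np.linspace(0.0, 31.0, 2000)
gap = np.max(np.abs(pwl_rate(snr, knots) - np.log2(1.0 + snr)))
# worst-case gap is modest (~0.09 bit/s/Hz) even with these few dyadic knots
```

Once the curve is a set of linear segments, expectations over a fading distribution reduce to piecewise-linear integrals, which is what makes the ergodic rate and outage probability tractable to any desired accuracy (by adding knots).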