997 results for Complexity processing
Abstract:
Throughout the evolution of technology, devices interconnected by cables have been used. Cables restrict the user's freedom of movement and can pick up interference from one another when the cabling network is extensive. As wireless technology advanced, it was progressively adapted to electronic equipment, which at the same time kept getting smaller. This imposes the need to use such devices as remote controls without cables, given the drawbacks that cables entail. The present work aims to unify three technologies that may have a great affinity in the future. · Devices based on the Android system. Since their beginnings they have evolved at a meteoric pace, becoming ever faster and better. · Wireless systems. Wi-Fi and Bluetooth systems have increasingly become part of our lives and are present in practically every device. · Robotics. Virtually every production process incorporates a robot. Robots are necessary for many jobs that, although a human could perform them, a robot completes in less time and with less danger. Although the first two technologies go hand in hand (who does not own a phone with Wi-Fi and Bluetooth connectivity?), few designs combine these fields with robotics. The final objective of this work is to develop an Android application for the remote control of a robot using a wireless communication system. The application developed allows the user to control the robot at their convenience in a touch/remote-controlled environment. Thanks to the use of simulators for both languages (RAPID and Android), it was possible to carry out the programming without having to be physically present at the robot that is the subject of this work. As the project progressed, the amount of data sent to the robot and the complexity of its processing were increased, while the aesthetics of the application were also improved.
Finally, the developed application was used with the robot, successfully making it perform the movements sent from the programmed tablet.
Abstract:
BACKGROUND: Alzheimer's disease (AD) is the most frequent form of dementia in the elderly, and no effective treatment is currently available. The mechanisms triggering AD onset and progression are still imperfectly dissected. We aimed at deciphering the modifications occurring in vivo during the very early stages of AD, before the development of amyloid deposits, neurofibrillary tangles, neuronal death and inflammation. Most current AD models, based on Amyloid Precursor Protein (APP) overproduction beginning in utero to rapidly reproduce the histological and behavioral features of the disease within a few months, are not appropriate for studying the early steps of AD development. As a means to mimic in vivo amyloid APP processing closer to the human situation in AD, we used an adeno-associated virus (AAV)-based transfer of human mutant APP and Presenilin 1 (PS1) genes to the hippocampi of two-month-old C57Bl/6J mice to express human APP without significant overexpression and to specifically induce its amyloid processing. RESULTS: The human APP, βCTF and Aβ42/40 ratio were similar to those in hippocampal tissues from AD patients. Three months after injection, the murine Tau protein was hyperphosphorylated and rapid synaptic failure occurred, characterized by decreased levels of both PSD-95 and metabolites related to neuromodulation on proton magnetic resonance spectroscopy ((1)H-MRS). Astrocytic GLT-1 transporter levels were lower and the tonic glutamatergic current was stronger in electrophysiological recordings of the CA1 hippocampal region, revealing the overstimulation of extrasynaptic N-methyl-D-aspartate receptors (NMDAR) which precedes the loss of long-term potentiation (LTP). These modifications were associated with early behavioral impairments in the Open-field, Y-maze and Morris Water Maze tasks.
CONCLUSIONS: Altogether, this demonstrates that AD-like APP processing, yielding levels of APP, βCTF and an Aβ42/Aβ40 ratio similar to those observed in AD patients, is sufficient to rapidly trigger early steps of the amyloidogenic and Tau pathways in vivo. With this strategy, we identified a sequence of early events likely to account for disease onset and described a model that may facilitate efforts to decipher the factors triggering AD and to evaluate early neuroprotective strategies.
Abstract:
P-glycoprotein (Pgp), a protein encoded by the Multi Drug Resistance (MDR1) gene, has a detoxifying function and might influence the toxicity, pharmacokinetics and pharmacodynamics of drugs. Sampling strategies that improve Pgp studies could be useful for optimizing the sensitivity and reproducibility of efflux assays. This study aimed to compare Pgp expression and efflux activity by measuring Rhodamine123 (Rh123) retention in lymphocytes stored under different conditions, in order to evaluate the potential utility of any of the storage conditions for assessing Pgp functionality. Our results show no change in Pgp protein expression by confocal studies and Western blotting, nor changes at the mRNA level (qRT-PCR). No differences in Rh123 efflux by Pgp activity assays were found between fresh and frozen lymphocytes within 24 hours of blood extraction, using either of the two Pgp-specific inhibitors (VP and PSC833). Different working conditions in the 24 hours after blood extraction do not affect Rh123 efflux. These results allow standardization of Pgp activity measurement in different individuals with different timing of blood sampling and in different geographic areas.
Abstract:
Forensic intelligence has recently gathered increasing attention as a potential expansion of forensic science that may contribute to a wider policing and security context. Whilst the new avenue is certainly promising, relatively few attempts to incorporate models, methods and techniques into practical projects have been reported. This work reports a practical application of a generalised and transversal framework for developing forensic intelligence processes, referred to here as the Transversal model, adapted from previous work. Visual features present in the images of four datasets of false identity documents were systematically profiled and compared using image processing for the detection of a series of modus operandi (M.O.) actions. The nature of these series and their relation to the notion of common source was evaluated with respect to alternative known information, and inferences were drawn regarding the respective crime systems. 439 documents seized by police and border guard authorities across 10 jurisdictions in Switzerland, with known and unknown source-level links, formed the datasets for this study. Training sets were developed based on both known source-level data and visually supported relationships. Performance was evaluated through the use of intra-variability and inter-variability scores drawn from over 48,000 comparisons. The optimised method exhibited significant sensitivity combined with strong specificity, demonstrating its ability to support forensic intelligence efforts.
Abstract:
Climate change affects the rate of insect invasions as well as the abundance, distribution and impacts of such invasions on a global scale. Among the principal analytical approaches to predicting and understanding the future impacts of biological invasions are Species Distribution Models (SDMs), typically in the form of correlative Ecological Niche Models (ENMs). An underlying assumption of ENMs is that species-environment relationships remain preserved during extrapolations in space and time, although this assumption is widely criticised. The semi-mechanistic modelling platform CLIMEX employs a top-down approach based on species' ecophysiological traits and is able to avoid some of these extrapolation issues, making it highly applicable to investigating biological invasions in the context of climate change. The tephritid fruit flies (Diptera: Tephritidae) comprise some of the most successful invasive species and most serious economic pests around the world. Here we project CLIMEX models for 12 tephritid species onto future climate scenarios to examine overall patterns of climate suitability and forecast potential distributional changes for this group. We further compare the aggregate response of the group against species-specific responses. We then consider additional drivers of biological invasions to examine how invasion potential is influenced by climate, fruit production and trade indices. For the group of tephritid species examined here, climate change is predicted to decrease global climate suitability and to shift the cumulative distribution poleward. However, when examining species-level patterns, the predominant directionality of range shifts for 11 of the 12 species is eastward. Most notably, management will need to consider regional changes in fruit fly invasion potential where high fruit production, trade indices and the predicted distributions of these flies overlap.
Abstract:
Many aspects of human behavior are driven by rewards, yet different people are differentially sensitive to rewards and punishment. In this study, we show that white matter microstructure in the uncinate/inferior fronto-occipital fasciculus, defined by fractional anisotropy values derived from diffusion tensor magnetic resonance images, correlates with both short-term (indexed by the fMRI blood oxygenation level-dependent response to reward in the nucleus accumbens) and long-term (indexed by the trait measure sensitivity to punishment) reactivity to rewards. Moreover, trait measures of reward processing were also correlated with reward-related functional activation in the nucleus accumbens. The white matter tract revealed by the correlational analysis connects the anterior temporal lobe with the medial and lateral orbitofrontal cortex and also supplies the ventral striatum. The pattern of strong correlations suggests an intimate relationship between white matter structure and reward-related behavior that may also play a role in a number of pathological conditions, such as addiction and pathological gambling.
Abstract:
Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves predicting an ordering of the data points rather than a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates a better theoretical understanding of the problem but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics, and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data that take advantage of various non-vectorial data representations, and preference learning algorithms that are suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in bioinformatics.
Training kernel-based ranking algorithms can be infeasible when the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be efficiently trained with large amounts of data. For situations where only a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the efficient training of the algorithms but also fast regularization parameter selection, multiple-output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
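The regularized least-squares approach to preference learning described above can be illustrated with a minimal sketch: given pairs stating that one item is preferred over another, fit a linear scoring function so that preferred items score higher. This is only an illustration of the pairwise least-squares idea, not the thesis's actual algorithms or kernels; all names and parameter values here are assumptions.

```python
import numpy as np

def fit_preference_rls(X, pairs, lam=1.0):
    """Least-squares preference learning sketch: for each pair (i, j),
    meaning 'item i is preferred over item j', fit w so that the score
    difference w.x_i - w.x_j is close to 1, with ridge regularization."""
    D = np.array([X[i] - X[j] for i, j in pairs])  # pairwise difference vectors
    n_feat = X.shape[1]
    # Closed-form ridge solution: w = (D^T D + lam*I)^-1 D^T 1
    w = np.linalg.solve(D.T @ D + lam * np.eye(n_feat),
                        D.T @ np.ones(len(pairs)))
    return w

# Toy example: rank three 1-D points, higher value preferred
X = np.array([[0.1], [0.5], [0.9]])
pairs = [(2, 1), (1, 0), (2, 0)]
w = fit_preference_rls(X, pairs, lam=0.1)
scores = X @ w
ranking = np.argsort(-scores)  # indices from best to worst
```

The closed-form solution is one reason this family of methods has attractive computational properties: the same matrix factorizations can be reused for fast regularization parameter selection and cross-validation.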
Abstract:
This thesis deals with distance transforms, which are a fundamental issue in image processing and computer vision. In this thesis, two new distance transforms for gray-level images are presented. As a new application for distance transforms, they are applied to gray-level image compression. Both new distance transforms extend the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer values (DTOCS) and a real-valued distance transform (EDTOCS) on gray-level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray-level image and are extremely simple to implement. Only two image buffers are needed: the original gray-level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images, the two-pass distance algorithm has to be applied to the image more than once, typically 3-10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance functions (GRAYMAT, etc.) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way; it gives a weighted version of the chessboard distance map in which the weights are not constant but are the gray-value differences of the original image. The difference between the DTOCS map and other distance transforms for gray-level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray-level differences in a different way.
It propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray-level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group of methods compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally. It is shown that the time complexity of the algorithms is independent of the number of control points, i.e. the compression ratio. A new morphological image decompression scheme, the 8 kernels' method, is also presented. Several decompressed images are shown. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
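The two-pass principle behind the DTOCS can be sketched as follows, taking the local step cost between 8-neighbours as one plus the absolute gray-level difference. This is a simplified illustration under assumed conventions, not the thesis's exact kernels (and the EDTOCS's Euclidean propagation differs).

```python
import numpy as np

def dtocs(gray, seed_mask, n_iter=3):
    """Two-pass gray-level distance transform in the spirit of the DTOCS.
    Each step between 8-neighbours costs 1 + |gray difference|;
    seed_mask marks pixels where the distance is zero. For complicated
    images the two passes are repeated (here n_iter times)."""
    h, w = gray.shape
    d = np.where(seed_mask, 0.0, np.inf)
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]  # neighbours above/left
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]      # neighbours below/right
    for _ in range(n_iter):
        for y in range(h):                        # forward raster pass
            for x in range(w):
                for dy, dx in fwd:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        cost = 1 + abs(int(gray[y, x]) - int(gray[ny, nx]))
                        d[y, x] = min(d[y, x], d[ny, nx] + cost)
        for y in range(h - 1, -1, -1):            # backward raster pass
            for x in range(w - 1, -1, -1):
                for dy, dx in bwd:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        cost = 1 + abs(int(gray[y, x]) - int(gray[ny, nx]))
                        d[y, x] = min(d[y, x], d[ny, nx] + cost)
    return d

# Toy example: flat image, single seed pixel in the corner
gray = np.zeros((3, 3), dtype=int)
seeds = np.zeros((3, 3), dtype=bool)
seeds[0, 0] = True
dist = dtocs(gray, seeds, n_iter=1)
```

On a flat image the result reduces to the plain chessboard distance, which is why the DTOCS can be described as a weighted version of the chessboard distance map.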
Abstract:
As a result of the growing interest in studying employee well-being as a complex process that shows high levels of within-individual variability and evolves over time, the present study considers the experience of flow in the workplace from a nonlinear dynamical systems approach. Our goal is to offer new ways to move the study of employee well-being beyond linear approaches. With nonlinear dynamical systems theory as the backdrop, we conducted a longitudinal study using the experience sampling method and qualitative semi-structured interviews for data collection; 6981 data records were collected from a sample of 60 employees. The obtained time series were analyzed using various techniques derived from nonlinear dynamical systems theory (i.e., recurrence analysis and surrogate data) and multiple correspondence analyses. The results revealed the following: 1) flow in the workplace presents a high degree of within-individual variability, and this variability is chaotic in most cases (75%); 2) high levels of flow are associated with chaos; and 3) different dimensions of the flow experience (e.g., merging of action and awareness) as well as individual (e.g., age) and job characteristics (e.g., job tenure) are associated with the emergence of different dynamic patterns (chaotic, linear and random).
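As a rough illustration of the recurrence-analysis idea mentioned above, the simplest recurrence-plot statistic measures how often a time series revisits previously visited states. This is a minimal sketch only; the study's actual recurrence quantification and surrogate-data tests are more elaborate, and the threshold `eps` is an assumed free parameter.

```python
import numpy as np

def recurrence_rate(series, eps):
    """Fraction of distinct time-point pairs whose values lie closer
    than eps: the basic recurrence-plot statistic used to distinguish
    chaotic, linear and random dynamics in a time series."""
    x = np.asarray(series, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])  # pairwise distance matrix
    R = dist < eps                          # recurrence matrix
    n = len(x)
    return (R.sum() - n) / (n * (n - 1))    # exclude the diagonal
```

A perfectly repetitive series yields a rate of 1, while a monotonically drifting one yields a rate near 0; in practice the observed series is compared against surrogate (shuffled or phase-randomized) series to test whether its structure is genuinely nonlinear.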
Abstract:
Neural signal processing is a discipline within neuroengineering. This interdisciplinary approach combines principles from machine learning, signal processing theory, and computational neuroscience applied to problems in basic and clinical neuroscience. The ultimate goal of neuroengineering is a technological revolution, where machines would interact in real time with the brain. Machines and brains could interface, enabling normal function in cases of injury or disease, brain monitoring, and/or medical rehabilitation of brain disorders. Much current research in neuroengineering is focused on understanding the coding and processing of information in the sensory and motor systems, quantifying how this processing is altered in the pathological state, and how it can be manipulated through interactions with artificial devices including brain–computer interfaces and neuroprosthetics.
Abstract:
In the context of autonomous sensors powered by small-size photovoltaic (PV) panels, this work analyses how the efficiency of DC/DC-converter-based power processing circuits can be improved by an appropriate selection of the inductor current that transfers the energy from the PV panel to a storage unit. Each component of the power losses (fixed, conduction and switching losses) in the DC/DC converter depends in a specific way on the average inductor current, so there is an optimal value of this current that causes minimal losses and, hence, maximum efficiency. This idea has been tested experimentally using two commercial DC/DC converters whose average inductor current is adjustable. Experimental results show that the efficiency can be improved by up to 12% by selecting an optimal value of that current, which is around 300-350 mA for these DC/DC converters.
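The loss decomposition described above can be sketched numerically. Assuming, for illustration only, that fixed losses are constant while the converter operates, conduction losses grow as R·I², and switching losses grow roughly linearly with I, the efficiency has an interior maximum at I_opt = sqrt(P_fix/R). The parameter values below are invented for the sketch and are not taken from the paper.

```python
import numpy as np

# Illustrative loss model (all parameter values are assumptions):
V = 2.0       # panel voltage during energy transfer [V]
P_FIX = 0.05  # fixed losses: quiescent/control power [W]
R = 0.5       # series resistance driving conduction losses R*I^2 [ohm]
K_SW = 0.02   # switching losses, modelled as proportional to I [W/A]

def efficiency(i):
    """Fraction of the input power V*i that reaches the storage unit."""
    p_loss = P_FIX + R * i**2 + K_SW * i
    return 1.0 - p_loss / (V * i)

# Numerical optimum over a sweep of average inductor currents
currents = np.linspace(0.05, 1.0, 2000)   # 50 mA .. 1 A
i_best = currents[np.argmax(efficiency(currents))]

# Closed form: d(eff)/dI = 0  =>  I_opt = sqrt(P_FIX / R)
i_opt = (P_FIX / R) ** 0.5
```

With these assumed parameters the optimum lands near 316 mA: fixed losses dominate at low currents, conduction losses at high currents, and the efficiency peaks in between, which is the qualitative behaviour the paper exploits.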
Abstract:
The heated debate over whether there is only a single mechanism or two mechanisms for morphology has diverted valuable research energy away from the more critical questions about the neural computations involved in the comprehension and production of morphologically complex forms. Cognitive neuroscience data implicate many brain areas. All extant models, whether they rely on a connectionist network or espouse two mechanisms, are too underspecified to explain why more than a few brain areas differ in their activity during the processing of regular and irregular forms. No one doubts that the brain treats regular and irregular words differently, but brain data indicate that a simplistic account will not do. It is time for us to search for the critical factors free from theoretical blinders.
Abstract:
A novel unsymmetric dinucleating ligand (LN3N4) combining a tridentate and a tetradentate binding site linked through a m-xylyl spacer was synthesized as a ligand scaffold for preparing homo- and heterodimetallic complexes in which the two metal ions are bound in two different coordination environments. Site-selective binding of different metal ions is demonstrated: LN3N4 is able to discriminate between CuI and a complementary metal (M′ = CuI, ZnII, FeII, CuII, or GaIII), so that pure heterodimetallic complexes with the general formula [CuIM′(LN3N4)]n+ are synthesized. Reaction of the dicopper(I) complex [CuI2(LN3N4)]2+ with O2 leads to the formation of two different copper-dioxygen (Cu2O2) intermolecular species (O and TP) between two copper atoms located in the same site from different complex molecules. Taking advantage of this feature, the reaction of the heterodimetallic complexes [CuM′(LN3N4)]n+ with O2 at low temperature is used as a tool to determine the final position of the CuI center in the system, because only one of the two Cu2O2 species is formed.