982 results for Mathematical techniques


Relevance: 30.00%

Abstract:

Artifact removal from physiological signals is an essential component of the biosignal processing pipeline. The need for powerful and robust methods for this process has become particularly acute as healthcare technology deployment undergoes transition from the current hospital-centric setting toward a wearable and ubiquitous monitoring environment. Currently, determining the relative efficacy and performance of the multiple artifact removal techniques available on real-world data can be problematic, due to incomplete information on the uncorrupted desired signal. The majority of techniques are presently evaluated using simulated data, and therefore the quality of the conclusions is contingent on the fidelity of the model used. Consequently, in the biomedical signal processing community, there is considerable focus on the generation and validation of appropriate signal models for use in artifact suppression. Most approaches rely on mathematical models which capture suitable approximations to the signal dynamics or underlying physiology and, therefore, introduce some uncertainty into subsequent predictions of algorithm performance. This paper describes a more empirical approach to modeling the desired signal, demonstrated for functional brain monitoring tasks, that allows for the procurement of a ground truth signal which is highly correlated with the true desired signal contaminated by artifacts. The availability of this ground truth, together with the corrupted signal, can then aid in determining the efficacy of selected artifact removal techniques. A number of commonly implemented artifact removal techniques were evaluated using the described methodology to validate the proposed novel test platform. © 2012 IEEE.
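As a minimal illustration of how such a ground truth can be used to score an artifact removal technique (not the paper's own evaluation code; the signals and the "cleaned" output below are synthetic placeholders), one can compare correlation and SNR before and after cleaning:

```python
import numpy as np

def snr_db(reference, signal):
    """SNR of `signal` with respect to the ground-truth `reference`, in dB."""
    noise = signal - reference
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
ground_truth = np.sin(2 * np.pi * 1.0 * t)                       # stands in for the uncorrupted desired signal
corrupted = ground_truth + 0.8 * rng.standard_normal(t.size)     # artifact-laden recording
cleaned = ground_truth + 0.3 * rng.standard_normal(t.size)       # mock output of some artifact-removal technique

print("correlation with ground truth:", round(np.corrcoef(ground_truth, cleaned)[0, 1], 3))
print("SNR improvement (dB):", round(snr_db(ground_truth, cleaned) - snr_db(ground_truth, corrupted), 2))
```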

Relevance: 30.00%

Abstract:

Chili powder is a globally traded commodity which has been found to be adulterated with Sudan dyes from 2003 onwards. In this study, chili powders were adulterated with varying quantities of Sudan I dye (0.1-5%) and spectra were generated using near infrared reflectance spectroscopy (NIRS) and Raman spectroscopy (on a spectrometer with a sample compartment modified as part of the study). Chemometrics were applied to the spectral data to produce quantitative and qualitative calibration models and prediction statistics. For the quantitative models, coefficients of determination (R²) were found to be 0.891-0.994, depending on which spectral data (NIRS/Raman) were processed, the mathematical algorithm used and the data pre-processing applied. The corresponding values for the root mean square error of calibration (RMSEC) and root mean square error of prediction (RMSEP) were found to be 0.208-0.851% and 0.141-0.831%, respectively, once again depending on the spectral data and the chemometric treatment applied to the data. A comparison of the chemometric parameters indicates that the NIR spectroscopy based models are superior to those produced from the Raman spectral data. The limit of detection (LOD), based on analysis of 20 blank chili powders against each calibration model, was 0.25% and 0.88% for the NIR and Raman data, respectively. In addition, adopting a qualitative approach with the spectral data and applying PCA or PLS-DA, it was possible to discriminate adulterated chili powders from non-adulterated chili powders.
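A hedged sketch of the kind of quantitative chemometric evaluation described above, using partial least squares regression on synthetic "spectra" (scikit-learn is assumed; the data are made up and merely stand in for NIRS/Raman measurements):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error, r2_score

# Hypothetical spectra: rows are samples, columns are wavelengths; y is the % Sudan I spiked in.
rng = np.random.default_rng(1)
y = rng.uniform(0.1, 5.0, size=60)                                           # adulteration level (%), as in the 0.1-5% range
X = np.outer(y, rng.normal(size=200)) + 0.05 * rng.normal(size=(60, 200))    # toy linear spectra plus noise

X_cal, X_val, y_cal, y_val = X[:40], X[40:], y[:40], y[40:]
pls = PLSRegression(n_components=5).fit(X_cal, y_cal)

y_cal_hat = pls.predict(X_cal).ravel()
y_val_hat = pls.predict(X_val).ravel()
print("R2 (calibration):", round(r2_score(y_cal, y_cal_hat), 3))
print("RMSEC:", round(np.sqrt(mean_squared_error(y_cal, y_cal_hat)), 3))     # root mean square error of calibration
print("RMSEP:", round(np.sqrt(mean_squared_error(y_val, y_val_hat)), 3))     # root mean square error of prediction
```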

Relevance: 30.00%

Abstract:

The Mathematical Program with Complementarity Constraints (MPCC) finds application in many fields. As the complementarity constraints fail the standard Linear Independence Constraint Qualification (LICQ) or the Mangasarian-Fromovitz constraint qualification (MFCQ) at any feasible point, nonlinear programming theory may not be directly applied to MPCC. However, the MPCC can be reformulated as an NLP problem and solved by nonlinear programming techniques. One of them, the Inexact Restoration (IR) approach, performs two independent phases in each iteration - the feasibility phase and the optimality phase. This work presents two versions of an IR algorithm to solve MPCC. In the feasibility phase two strategies were implemented, depending on the features of the constraints. One gives more importance to the complementarity constraints, while the other considers the priority of equality and inequality constraints, neglecting the complementarity ones. The optimality phase uses the same approach in both algorithm versions. The algorithms were implemented in MATLAB and the test problems are from the MACMPEC collection.
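For reference, a generic statement of the MPCC problem class discussed above (standard notation, not a specific MACMPEC test problem):

```latex
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & f(x) \\
\text{s.t.} \quad & h(x) = 0, \qquad g(x) \le 0, \\
& 0 \le G(x) \perp H(x) \ge 0,
\end{aligned}
```

where the last line abbreviates G(x) ≥ 0, H(x) ≥ 0 and G_i(x) H_i(x) = 0 for every component i; it is this complementarity condition that causes LICQ and MFCQ to fail at every feasible point.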

Relevance: 30.00%

Abstract:

Mathematical models and statistical analysis are key instruments in soil science research, as they can describe and/or predict the current state of a soil system. These tools allow us to explore the behavior of soil-related processes and properties, as well as to generate new hypotheses for future experimentation. A good model and analysis of variations in soil properties, permitting us to draw sound conclusions and to estimate spatially correlated variables at unsampled locations, clearly depends on the amount and quality of the data and on the robustness of the techniques and estimators used. The quality of the data, in turn, depends on a competent data collection procedure and on capable laboratory analytical work. Following the standard soil sampling protocols available, soil samples should be collected with attention to key points such as a convenient spatial scale, landscape homogeneity (or non-homogeneity), land color, soil texture, land slope and solar exposure. Obtaining good quality data from forest soils is predictably expensive, as it is labor intensive and demands considerable manpower and equipment both in field work and in laboratory analysis. Moreover, the sampling scheme to be used in a forest-field data collection campaign is not simple to design, as the chosen sampling strategies depend strongly on soil taxonomy. In fact, a sampling grid cannot be followed if rocks are found at the intended collection depth, if no soil is found at all, or if large trees prevent collection. Considering this, the proficient design of a soil data sampling campaign in forest terrain is not always a simple process and sometimes represents a real challenge. In this work, we present some difficulties that occurred during two experiments on forest soil conducted in order to study the spatial variation of some physico-chemical soil properties. Two different sampling protocols were considered for monitoring two types of forest soils located in NW Portugal: umbric regosol and lithosol. Two different tools for sample collection were also used: a manual auger and a shovel. Both scenarios were analyzed, and the results have led us to conclude that monitoring forest soil for mathematical and statistical investigation requires a data collection procedure compatible with established protocols, but the assumption of a pre-defined grid often fails when the variability of the soil property is not uniform in space. In this case, the sampling grid should be conveniently adapted from one part of the landscape to another, and this fact should be taken into account in any subsequent mathematical procedure.
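As a small, illustrative companion to the geostatistical goal mentioned above (estimating spatially correlated variables at unsampled locations), the sketch below computes an empirical semivariogram from irregularly located samples; the coordinates and soil values are hypothetical, not data from the two experiments:

```python
import numpy as np

def empirical_semivariogram(coords, values, bin_edges):
    """gamma(h) = mean of 0.5*(z_i - z_j)^2 over point pairs whose separation falls in each lag bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)                 # count each pair of samples once
    d, sq = d[iu], sq[iu]
    gamma = [sq[(d >= lo) & (d < hi)].mean() if np.any((d >= lo) & (d < hi)) else np.nan
             for lo, hi in zip(bin_edges[:-1], bin_edges[1:])]
    return np.array(gamma)

coords = np.random.default_rng(2).uniform(0, 100, size=(30, 2))   # hypothetical sample locations (m)
values = np.random.default_rng(3).normal(5.5, 0.4, size=30)       # e.g. soil pH at those locations
print(empirical_semivariogram(coords, values, np.arange(0, 60, 10)).round(3))
```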

Relevance: 30.00%

Abstract:

The use of subjective measures in epidemiology has intensified recently, notably with the increasingly explicit intent to integrate subjects' perception of their own health into the study of diseases and the evaluation of interventions. Psychometrics encompasses the statistical methods used to construct questionnaires and to analyse the data they produce. The aim of this thesis was to explore several methodological problems raised by the use of psychometric techniques in epidemiology. Three empirical studies are presented, concerning 1/ the validation phase of the instrument: the objective was to develop, using simulated data, a sample-size calculation tool for scale validation in psychiatry; 2/ the mathematical properties of the resulting measure: the objective was to compare the performance of the minimal clinically important difference of a questionnaire computed on cohort data, either within the framework of classical test theory (CTT) or within that of item response theory (IRT); 3/ its use in a longitudinal design: the objective was to compare, using simulated data, the performance of a statistical method for analysing the longitudinal evolution of a subjective phenomenon measured with CTT or IRT, in particular when some of the items available for the measurement differed at each time point. Finally, directed acyclic graphs were used to discuss, in light of the results of these three studies, the notion of information bias arising when subjective measures are used in epidemiology.
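To make the CTT/IRT contrast of point 2 concrete, here is a minimal sketch of the two-parameter logistic (2PL) item response function, one standard IRT model; the item parameters are illustrative and not taken from the thesis:

```python
import numpy as np

def irf_2pl(theta, a, b):
    """Two-parameter logistic IRT model: probability of endorsing an item
    given latent trait theta, discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)                 # a grid of latent trait values
print(irf_2pl(theta, a=1.2, b=0.5).round(3))

# The CTT counterpart would simply sum the observed item scores,
# whereas IRT models the response probability of each item separately.
```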

Relevance: 30.00%

Abstract:

The increasing interconnection of information and communication systems leads to a further increase in complexity and thus also to a further increase in security vulnerabilities. Classical protection mechanisms such as firewall systems and anti-malware solutions have long ceased to offer adequate protection against intrusions into IT infrastructures. Intrusion Detection Systems (IDS) have established themselves as a highly effective instrument for protection against cyber attacks. Such systems collect and analyse information from network components and hosts in order to detect unusual behaviour and security violations automatically. While signature-based approaches can only detect already known attack patterns, anomaly-based IDS are also able to recognise new, previously unknown attacks (zero-day attacks) at an early stage. The core problem of intrusion detection systems, however, lies in the optimal processing of the enormous volume of network data and in the development of an adaptive detection model that works in real time. To address these challenges, this dissertation provides a framework consisting of two main parts. The first part, called OptiFilter, uses a dynamic queuing concept to process the continuously arriving network data, continuously builds up network connections, and exports structured input data for the IDS. The second part is an adaptive classifier comprising a classifier model based on an Enhanced Growing Hierarchical Self-Organizing Map (EGHSOM), a model of the normal network state (NNB) and an update model. Within OptiFilter, tcpdump and SNMP traps are used to aggregate network packets and host events continuously. These aggregated network packets and host events are further analysed and converted into connection vectors. To improve the detection rate of the adaptive classifier, the artificial neural network GHSOM is investigated intensively and substantially extended. Different approaches are proposed and discussed in this dissertation: a classification-confidence margin threshold is defined to uncover unknown malicious connections; the stability of the growing topology is increased through novel approaches to the initialisation of the weight vectors and the reinforcement of the winner neurons; and a self-adaptive procedure is introduced to keep the model continuously up to date. In addition, the main task of the NNB model is the further examination of the unknown connections detected by the EGHSOM and the verification of whether they are normal. However, due to the concept-drift phenomenon, network traffic changes constantly, which leads in real time to the generation of non-stationary network data. This phenomenon is controlled by the update model. The EGHSOM model detects new anomalies effectively and the NNB model adapts optimally to the changes in the network data. In the experimental evaluation, the framework showed promising results. In the first experiment the framework was evaluated in offline mode: OptiFilter was assessed with offline, synthetic and realistic data, and the adaptive classifier was evaluated with 10-fold cross-validation to estimate its accuracy.

In the second experiment the framework was installed on a 1 to 10 GB network link and evaluated online in real time. OptiFilter successfully converted the enormous volume of network data into structured connection vectors, and the adaptive classifier classified them precisely. A comparative study between the developed framework and other well-known IDS approaches shows that the proposed IDS framework outperforms all of them. This can be attributed to the following key points: processing of the collected network data, achievement of the best performance (e.g., overall accuracy), detection of unknown connections, and development of a real-time intrusion detection model.
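One plausible reading of the classification-confidence margin threshold described above (an illustration only, not the EGHSOM implementation) is a distance margin between the best and second-best matching units of a self-organizing map; connections with a small margin are flagged as unknown:

```python
import numpy as np

def confidence_margin(x, codebook):
    """Distance-based confidence margin of sample x against a trained SOM codebook
    (rows = neuron weight vectors). A small margin means the winner neuron is barely
    better than the runner-up, i.e. low classification confidence."""
    d = np.sort(np.linalg.norm(codebook - x, axis=1))
    best, second = d[0], d[1]
    return (second - best) / second

codebook = np.random.default_rng(4).normal(size=(16, 8))   # hypothetical trained weight vectors
x = np.random.default_rng(5).normal(size=8)                # one connection vector
THRESHOLD = 0.1                                            # tuning parameter, assumed
label = "unknown" if confidence_margin(x, codebook) < THRESHOLD else "classified by winner neuron"
print(label)
```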

Relevance: 30.00%

Abstract:

Since no physical system can ever be completely isolated from its environment, the study of open quantum systems is pivotal to reliably and accurately control complex quantum systems. In practice, reliability of the control field needs to be confirmed via certification of the target evolution, while accuracy requires the derivation of high-fidelity control schemes in the presence of decoherence. In the first part of this thesis an algebraic framework is presented that allows one to determine the minimal requirements for the unique characterisation of arbitrary unitary gates in open quantum systems, independent of the particular physical implementation of the employed quantum device. To this end, a set of theorems is devised that can be used to assess whether a given set of input states on a quantum channel is sufficient to judge whether a desired unitary gate is realised. This makes it possible to determine the minimal input for such a task, which proves to be, quite remarkably, independent of system size. These results elucidate the fundamental limits regarding certification and tomography of open quantum systems. The combination of these insights with state-of-the-art Monte Carlo process certification techniques permits a significant improvement in scaling when certifying arbitrary unitary gates. This improvement is not restricted to quantum information devices where the basic information carrier is the qubit; it also extends to systems where the fundamental informational entities can be of arbitrary dimensionality, the so-called qudits. The second part of this thesis concerns the impact of these findings from the point of view of Optimal Control Theory (OCT). OCT for quantum systems utilises concepts from engineering such as feedback and optimisation to engineer constructive and destructive interferences in order to steer a physical process in a desired direction. It turns out that the aforementioned mathematical findings allow the derivation of novel optimisation functionals that significantly reduce not only the required memory for numerical control algorithms but also the total CPU time required to obtain a certain fidelity for the optimised process. The thesis concludes by discussing two problems of fundamental interest in quantum information processing from the point of view of optimal control - the preparation of pure states and the implementation of unitary gates in open quantum systems. For both cases specific physical examples are considered: for the former, the vibrational cooling of molecules via optical pumping; for the latter, a superconducting phase qudit implementation. In particular, it is illustrated how features of the environment can be exploited to reach the desired targets.
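A toy sketch of the certification idea, checking whether an "implemented" operation reproduces a target unitary on a handful of probe states; the dimension, probes and perturbation are all hypothetical, and basis-state probes alone are not the minimal sufficient input set derived in the thesis:

```python
import numpy as np

d = 3                                        # a qutrit, i.e. a qudit with d = 3
rng = np.random.default_rng(6)

# Target unitary U and an "implemented" unitary V (here U times a small unitary perturbation).
H = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
U = np.linalg.qr(H)[0]                       # Q factor of a complex matrix is unitary
P = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
V = U @ np.linalg.qr(np.eye(d) + 0.01 * (P - P.conj().T))[0]

# Probe the operation with a few pure input states and compare outputs with the target.
probes = [np.eye(d)[k] for k in range(d)]
fidelities = [abs(np.vdot(U @ psi, V @ psi)) ** 2 for psi in probes]
print(np.round(fidelities, 4))               # all close to 1 if V agrees with U on these probes
```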

Relevance: 30.00%

Abstract:

The main instrument used in psychological measurement is the self-report questionnaire. One of its major drawbacks, however, is its susceptibility to response biases. A known strategy to control these biases has been the use of so-called ipsative items. Ipsative items require the respondent to make between-scale comparisons within each item; the selected option determines the scale to which the weight of the answer is attributed. Consequently, in questionnaires consisting only of ipsative items, every respondent is allotted the same total score, which each can distribute differently over the scales. This type of response format therefore yields data that can be considered compositional from its inception. Methodologically oriented psychologists have heavily criticized this item format, since the resulting data are also marked by the associated unfavourable statistical properties. Nevertheless, clinicians have kept using these questionnaires to their satisfaction. This investigation therefore aims to evaluate both positions and addresses the similarities and differences between the two data collection methods. The ultimate objective is to formulate a guideline on when to use which type of item format. The comparison is based on data obtained with both an ipsative and a normative version of three psychological questionnaires, which were administered to 502 first-year psychology students according to a balanced within-subjects design. Previous research only compared the direct ipsative scale scores with the derived ipsative scale scores. The use of compositional data analysis techniques also enables one to compare derived normative score ratios with direct normative score ratios. The addition of the second comparison not only offers the advantage of a better-balanced research strategy. In principle it also allows for parametric testing in the evaluation
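A small sketch of why ipsative scores are compositional and of the score ratios that a compositional view makes comparable across item formats; all numbers, the four scales and the fixed total of 20 points are made up:

```python
import numpy as np

# Hypothetical scores of three respondents on four scales, once from an ipsative questionnaire
# (forced distribution of a fixed total of 20 points) and once from a normative version.
ipsative  = np.array([[10, 4, 3, 3], [5, 5, 5, 5], [2, 8, 6, 4]], dtype=float)
normative = np.array([[18, 9, 7, 6], [12, 11, 12, 13], [5, 16, 13, 9]], dtype=float)

assert np.allclose(ipsative.sum(axis=1), 20)        # constant-sum constraint: compositional by design

def scale_ratios(x):
    """Ratios of every scale score to the first scale; under a compositional view these ratios
    are the quantities that can be compared between ipsative and normative formats."""
    return x / x[:, [0]]

print(scale_ratios(ipsative).round(2))
print(scale_ratios(normative).round(2))             # similar ratio patterns would suggest the formats agree
```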

Relevance: 30.00%

Abstract:

In 2000 the European Statistical Office published the guidelines for developing the Harmonized European Time Use Surveys system. Under this unified framework, the first Time Use Survey of national scope was conducted in Spain during 2002–03. The aim of these surveys is to understand human behavior and people's lifestyles. Time allocation data are compositional in origin, that is, they are subject to non-negativity and constant-sum constraints, so standard multivariate techniques cannot be applied to them directly. The goal of this work is to identify homogeneous Spanish Autonomous Communities with regard to the typical activity pattern of their respective populations. To this end, a fuzzy clustering approach is followed. Rather than the hard partitioning of classical clustering, where objects are allocated to only a single group, fuzzy methods identify overlapping groups of objects by allowing them to belong to more than one group. Concretely, the probabilistic fuzzy c-means algorithm is conveniently adapted to deal with the Spanish Time Use Survey microdata. As a result, a map distinguishing Autonomous Communities with similar activity patterns is drawn. Keywords: time use data; fuzzy clustering; FCM; simplex space; Aitchison distance
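A minimal sketch of probabilistic fuzzy c-means run in Aitchison geometry, using the fact that the Aitchison distance equals the Euclidean distance between centred log-ratio (clr) transformed compositions; the time-use compositions below are hypothetical, not the Survey microdata:

```python
import numpy as np

def clr(x):
    """Centred log-ratio transform; Euclidean distance in clr space equals the Aitchison distance."""
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

def fuzzy_cmeans(Z, c=2, m=2.0, n_iter=100, seed=0):
    """Plain probabilistic fuzzy c-means on already-transformed data Z (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=Z.shape[0])           # membership matrix, rows sum to 1
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ Z) / W.sum(axis=0)[:, None]         # weighted cluster centres
        d = np.linalg.norm(Z[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)                    # renormalise memberships
    return U, centers

# Hypothetical daily time-use compositions (fractions of the day over four activity categories)
X = np.array([[0.35, 0.10, 0.30, 0.25],
              [0.33, 0.12, 0.31, 0.24],
              [0.20, 0.05, 0.45, 0.30]])
U, centers = fuzzy_cmeans(clr(X), c=2)
print(U.round(2))                                            # fuzzy memberships of each unit in each cluster
```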

Relevance: 30.00%

Abstract:

In order to obtain a high-resolution Pleistocene stratigraphy, eleven continuously cored boreholes, 100 to 220 m deep, were drilled in the northern part of the Po Plain by Regione Lombardia in the last five years. Quantitative provenance analysis (QPA; Weltje and von Eynatten, 2004) of Pleistocene sands was carried out using multivariate statistical analysis (principal component analysis, PCA, and similarity analysis) on an integrated data set including high-resolution bulk petrography and heavy-mineral analyses of the Pleistocene sands and of 250 major and minor modern rivers draining the southern flank of the Alps from west to east (Garzanti et al., 2004, 2006). Prior to the onset of major Alpine glaciations, metamorphic and quartzofeldspathic detritus from the Western and Central Alps was carried from the axial belt to the Po basin by a trunk river flowing longitudinally, parallel to the Southalpine belt (Vezzoli and Garzanti, 2008). This scenario changed rapidly during marine isotope stage 22 (0.87 Ma), with the onset of the first major Pleistocene glaciation in the Alps (Muttoni et al., 2003). PCA and similarity analysis of the core samples show that the longitudinal trunk river was at this time shifted southward by the rapid southward and westward progradation of transverse alluvial river systems fed from the Central and Southern Alps. Sediments were transported southward by braided river systems, while glacial sediments carried by Alpine valley glaciers invaded the alluvial plain. Keywords: detrital modes; modern sands; provenance; principal component analysis; similarity; Canberra distance; palaeodrainage
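A hedged sketch of the two statistical steps named above, PCA and a Canberra-distance similarity analysis, applied to made-up detrital modes (scikit-learn and SciPy assumed; these are not the integrated data set of the study):

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.spatial.distance import pdist, squareform

# Hypothetical detrital modes (percentages of quartz, feldspar, lithics, heavy minerals per sample)
modes = np.array([[55., 20., 20., 5.],
                  [40., 25., 30., 5.],
                  [60., 15., 20., 5.],
                  [35., 30., 30., 5.]])

scores = PCA(n_components=2).fit_transform(modes)    # sample scores on the first two principal components
sim = squareform(pdist(modes, metric='canberra'))    # pairwise Canberra distances for the similarity analysis
print(scores.round(2))
print(sim.round(2))
```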

Relevance: 30.00%

Abstract:

Methodology used to measure in vitro gas production is reviewed to determine impacts of sources of variation on resultant gas production profiles (GPP). Current methods include measurement of gas production at constant pressure (e.g., use of gas tight syringes), a system that is inexpensive, but may be less sensitive than others thereby affecting its suitability in some situations. Automated systems that measure gas production at constant volume allow pressure to accumulate in the bottle, which is recorded at different times to produce a GPP, and may result in sufficiently high pressure that solubility of evolved gases in the medium is affected, thereby resulting in a recorded volume of gas that is lower than that predicted from stoichiometric calculations. Several other methods measure gas production at constant pressure and volume with either pressure transducers or sensors, and these may be manual, semi-automated or fully automated in operation. In these systems, gas is released as pressure increases, and vented gas is recorded. Agitating the medium does not consistently produce more gas with automated systems, and little or no effect of agitation was observed with manual systems. The apparatus affects GPP, but mathematical manipulation may enable effects of apparatus to be removed. The amount of substrate affects the volume of gas produced, but not rate of gas production, provided there is sufficient buffering capacity in the medium. Systems that use a very small amount of substrate are prone to experimental error in sample weighing. Effect of sample preparation on GPP has been found to be important, but further research is required to determine the optimum preparation that mimics animal chewing. Inoculum is the single largest source of variation in measuring GPP, as rumen fluid is variable and sampling schedules, diets fed to donor animals and ratios of rumen fluid/medium must be selected such that microbial activity is sufficiently high that it does not affect rate and extent of fermentation. Species of donor animal may also cause differences in GPP. End point measures can be mathematically manipulated to account for species differences, but rates of fermentation are not related. Other sources of inocula that have been used include caecal fluid (primarily for investigating hindgut fermentation in monogastrics), effluent from simulated rumen fermentation (e.g., 'Rusitec', which was as variable as rumen fluid), faeces, and frozen or freeze-dried rumen fluid (which were both less active than fresh rumen fluid). Use of mixtures of cell-free enzymes, or pure cultures of bacteria, may be a way of increasing GPP reproducibility, while reducing reliance on surgically modified animals. However, more research is required to develop these inocula. A number of media have been developed which buffer the incubation and provide relevant micro-nutrients to the microorganisms. To date, little research has been completed on relationships between the composition of the medium and measured GPP. However, comparing GPP from media either rich in N or N-free, allows assessment of contributions of N containing compounds in the sample. (c) 2005 Published by Elsevier B.V.
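The abstract does not name a curve model, but gas production profiles of this kind are often summarised by fitting a simple single-pool exponential model; the sketch below does so with SciPy on hypothetical gas volumes (the functional form and all numbers are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def gpp(t, A, c, L):
    """Single-pool exponential gas production model: asymptote A (mL), fractional rate c (/h), lag L (h)."""
    return np.where(t > L, A * (1.0 - np.exp(-c * (t - L))), 0.0)

# Hypothetical cumulative gas volumes (mL) recorded at incubation times (h)
t = np.array([2, 4, 8, 12, 24, 48, 72, 96], dtype=float)
y = np.array([3, 8, 20, 30, 48, 62, 66, 67], dtype=float)

(A, c, L), _ = curve_fit(gpp, t, y, p0=[70, 0.05, 1.0])
print(f"asymptote={A:.1f} mL, fractional rate={c:.3f}/h, lag={L:.1f} h")
```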

Relevance: 30.00%

Abstract:

This is the first of two articles presenting a detailed review of the historical evolution of mathematical models applied in the development of building technology, covering both conventional and intelligent buildings. After presenting the technical differences between conventional and intelligent buildings, this article reviews the existing mathematical models, the abstraction levels of these models, and their links to the intelligent-buildings literature. The advantages and limitations of the applied mathematical models are identified, and the models are classified in terms of their application range and goal. We then describe how the early mathematical models, mainly physical models applied to conventional buildings, have faced new challenges for the design and management of intelligent buildings and have led to the use of models offering more flexibility to cope better with various uncertainties. In contrast with the early modelling techniques, approaches based on neural networks, expert systems, fuzzy logic and genetic models provide promising means of accommodating these complications, as intelligent buildings now require integrated technologies that involve solving complex, multi-objective and integrated decision problems.

Relevance: 30.00%

Abstract:

In the last few years a state-space formulation has been introduced into self-tuning control. This has not only allowed for a wider choice of possible control actions, but has also provided an insight into the theory underlying—and hidden by—that used in the polynomial description. This paper considers many of the self-tuning algorithms, both state-space and polynomial, presently in use, and by starting from first principles develops the observers which are, effectively, used in each case. At any specific time instant the state estimator can be regarded as taking one of two forms. In the first case the most recently available output measurement is excluded, and here an optimal and conditionally stable observer is obtained. In the second case the present output signal is included, and here it is shown that although the observer is once again conditionally stable, it is no longer optimal. This result is of significance, as many of the popular self-tuning controllers lie in the second, rather than first, category.
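A minimal sketch of the two observer forms contrasted above, for a hypothetical discrete-time plant with an assumed observer gain: in the first the newest output corrects only the prediction of the next state, in the second the current output corrects the estimate at the same instant (none of the matrices come from the paper):

```python
import numpy as np

# Hypothetical discrete-time plant x_{k+1} = A x_k + B u_k, y_k = C x_k + noise
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5], [0.3]])          # observer gain, assumed given (e.g. from pole placement)

def predictor_update(x_hat, u, y):
    """Case 1: the estimate of x_{k+1} excludes the measurement taken at k+1;
    y_k only enters through the prediction x_hat_{k+1} = A x_hat_k + B u_k + L (y_k - C x_hat_k)."""
    return A @ x_hat + B * u + L * (y - C @ x_hat)

def filter_update(x_hat, u, y_next):
    """Case 2: the output at k+1 is included in the estimate of x_{k+1}:
    predict first, then correct with the measurement taken at the same instant."""
    x_pred = A @ x_hat + B * u
    return x_pred + L * (y_next - C @ x_pred)

x_hat = np.zeros((2, 1))
x_hat = predictor_update(x_hat, u=1.0, y=0.2)
x_hat = filter_update(x_hat, u=1.0, y_next=0.25)
print(x_hat.ravel().round(4))
```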

Relevance: 30.00%

Abstract:

Techniques for modelling urban microclimates and urban block surface temperatures are desired by urban planners and architects for strategic urban design at the early design stages. This paper introduces a simplified mathematical model for urban simulations (UMsim), covering urban surface temperatures and microclimates. The nodal network model has been developed by integrating coupled thermal and airflow models. Direct solar radiation, diffuse radiation, reflected radiation, long-wave radiation, heat convection in air, and heat transfer in the exterior walls and ground within the complex have been taken into account. The relevant equations have been solved using the finite difference method under the Matlab platform. Comparisons have been conducted between the data produced by the simulation and those from an urban experimental study carried out in a real architectural complex on the campus of Chongqing University, China, in July 2005 and January 2006. The results show a satisfactory agreement between the two sets of data. UMsim can be used to simulate microclimates, in particular the surface temperatures of urban blocks; it can therefore be used to assess the impact of urban surface properties on urban microclimates. UMsim will be able to produce robust data and images of urban environments for sustainable urban design.
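As an illustrative fragment (not the UMsim model itself), the sketch below applies an explicit finite-difference scheme to one of the ingredients listed above, conduction through an exterior wall layer with convective boundaries; all material properties, dimensions and boundary temperatures are assumed values:

```python
import numpy as np

# Minimal 1-D explicit finite-difference conduction through an exterior wall layer,
# illustrating the kind of nodal update used in such models (illustrative values only).
k, rho, cp = 1.0, 2000.0, 900.0           # conductivity (W/m K), density (kg/m3), heat capacity (J/kg K)
h_out, h_in = 15.0, 5.0                   # convective coefficients at outer/inner surfaces (W/m2 K)
dx, dt = 0.02, 10.0                       # node spacing (m) and time step (s)
alpha = k / (rho * cp)
r = alpha * dt / dx**2                    # must satisfy r <= 0.5 for stability of the explicit scheme

T = np.full(11, 25.0)                     # initial wall node temperatures (deg C)
T_air_out, T_air_in = 35.0, 26.0          # boundary air temperatures (deg C)

for _ in range(360):                      # march one hour forward in time
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])          # interior nodes
    Tn[0]  = T[0]  + 2 * r * (T[1]  - T[0])  + 2 * dt * h_out * (T_air_out - T[0]) / (rho * cp * dx)
    Tn[-1] = T[-1] + 2 * r * (T[-2] - T[-1]) + 2 * dt * h_in  * (T_air_in  - T[-1]) / (rho * cp * dx)
    T = Tn

print(T.round(2))                         # temperature profile through the wall after one hour
```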