877 results for 190202 Computer Gaming and Animation
Abstract:
This paper presents a novel approach to the automatic classification of very large data sets composed of terahertz pulse transient signals, highlighting their potential use in biochemical, biomedical, pharmaceutical and security applications. Two different types of THz spectra are considered in the classification process. First, a binary classification study of poly-A and poly-C ribonucleic acid samples is performed. This is then contrasted with a more difficult multi-class classification problem involving spectra from six different powder samples that, although fairly indistinguishable in the optical spectrum, possess a few discernible spectral features in the terahertz part of the spectrum. Classification is performed using a complex-valued extreme learning machine algorithm that takes into account features in both the amplitude and the phase of the recorded spectra. Classification speed and accuracy are contrasted with those achieved using a support vector machine classifier. The study systematically compares the classifier performance achieved after adopting different Gaussian kernels when separating amplitude and phase signatures. The two signatures are presented as feature vectors for both training and testing purposes. The study confirms the utility of complex-valued extreme learning machine algorithms for the classification of the very large data sets generated by current terahertz imaging spectrometers. The classifier can take into consideration heterogeneous layers within an object, as would be required in a tomographic setting, and is sufficiently robust to detect patterns hidden inside noisy terahertz data sets. The proposed study opens up the opportunity to establish complex-valued extreme learning machine algorithms as new chemometric tools that will assist the wider proliferation of terahertz sensing technology for chemical sensing, quality control, security screening and clinical diagnosis. Furthermore, the proposed algorithm should also be very useful in other applications requiring the classification of very large datasets.
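As a concrete illustration of the classifier described above, the following is a minimal sketch of a complex-valued extreme learning machine in Python. It assumes each spectrum is encoded as a complex feature vector (e.g. amplitude * exp(1j*phase)); the tanh activation, the random-weight initialisation and the magnitude-based decision rule are illustrative assumptions rather than the exact design used in the paper.

```python
import numpy as np

def celm_train(X, T, n_hidden, rng=np.random.default_rng(0)):
    """Train a minimal complex-valued ELM.

    X : (n_samples, n_features) complex matrix, e.g. amplitude * exp(1j * phase)
        per spectral bin (an assumed encoding, not the paper's exact one).
    T : (n_samples, n_classes) one-hot target matrix.
    """
    n_features = X.shape[1]
    # Random complex input weights and biases; fixed, never trained.
    W = rng.standard_normal((n_features, n_hidden)) + 1j * rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden) + 1j * rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)          # complex hidden-layer responses
    beta = np.linalg.pinv(H) @ T    # output weights via Moore-Penrose pseudoinverse
    return W, b, beta

def celm_predict(X, W, b, beta):
    H = np.tanh(X @ W + b)
    scores = H @ beta
    return np.argmax(np.abs(scores), axis=1)  # class with the largest magnitude
```

Because only the linear output weights are fitted, training reduces to a single pseudoinverse, which is what makes this family of classifiers attractive for very large terahertz data sets.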
Abstract:
A model based on graph isomorphisms is used to formalize software evolution. Step by step, we narrow the search space by an informed selection of attributes based on the current state of the art in software engineering and generate a seed solution. We then traverse the resulting space using graph isomorphisms and other set operations over the vertex sets. The new solutions preserve the desired attributes. The goal of defining an isomorphism-based search mechanism is to construct predictors of evolution that can facilitate the automation of the 'software factory' paradigm. The model allows for automation via software tools implementing the concepts.
Abstract:
Neural stem cells (NSCs) are early precursors of neuronal and glial cells. NSCs are capable of generating identical progeny through virtually unlimited numbers of cell divisions (cell proliferation), producing daughter cells committed to differentiation. Nuclear factor kappa B (NF-kappaB) is an inducible, ubiquitous transcription factor also expressed in neurones, glia and neural stem cells. Recently, several pieces of evidence have been provided for a central role of NF-kappaB in the control of NSC proliferation. Here, we propose a novel mathematical model for NF-kappaB-driven proliferation of NSCs. We reconstruct the molecular pathway of activation and inactivation of NF-kappaB and its influence on cell proliferation using a system of nonlinear ordinary differential equations. We then use a combination of analytical and numerical techniques to study the model dynamics. The results obtained are illustrated by computer simulations and are, in general, in accordance with biological findings reported by several independent laboratories. The model is able to both explain and predict experimental data. Understanding the mechanisms of proliferation in NSCs may provide a novel outlook for both therapeutic approaches and basic research.
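The abstract does not give the equations, but the following toy system sketches, in Python, how an NF-kappaB activation/inactivation loop coupled to a proliferation read-out might be simulated; the two variables, rate laws and parameter values are illustrative assumptions, not the published model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def nfkb_toy(t, y, k_act=1.0, k_inh=0.8, k_syn=0.5, k_deg=0.3):
    """Toy two-variable loop: nuclear NF-kappaB (n) and its inhibitor IkB (i)."""
    n, i = y
    dn = k_act * (1.0 - n) / (1.0 + i) - k_inh * i * n  # activation damped by IkB
    di = k_syn * n - k_deg * i                           # NF-kappaB-induced IkB synthesis
    return [dn, di]

sol = solve_ivp(nfkb_toy, (0.0, 50.0), [0.1, 0.0], dense_output=True)
t = np.linspace(0.0, 50.0, 200)
n_t, i_t = sol.sol(t)

# A simple proliferation read-out: time-averaged nuclear NF-kappaB activity.
proliferation_index = np.trapz(n_t, t) / t[-1]
print(f"proliferation index ~ {proliferation_index:.3f}")
```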
Abstract:
We present a general approach based on nonequilibrium thermodynamics for bridging the gap between a well-defined microscopic model and the macroscopic rheology of particle-stabilised interfaces. Our approach is illustrated by starting with a microscopic model of hard ellipsoids confined to a planar surface, intended as a simple representation of a particle-stabilised fluid–fluid interface. More complex microscopic models can be readily handled using the methods outlined in this paper. From this microscopic starting point, we obtain the macroscopic constitutive equations using a combination of systematic coarse-graining, computer experiments and Hamiltonian dynamics. Exemplary numerical solutions of the constitutive equations are given for a variety of experimentally relevant flow situations to explore the rheological behaviour of our model. In particular, we calculate the shear and dilatational moduli of the interface over a wide range of surface coverages, ranging from the dilute isotropic regime to the concentrated nematic regime.
Abstract:
Customers will not continue to pay for a service if it is perceived to be of poor quality and/or of no value. With a paradigm shift towards business dependence on service-orientated IS solutions [1], it is critical that alignment exists between service definition, delivery and customer expectation if businesses are to ensure customer satisfaction. Services, and micro-service development, offer businesses a flexible structure for solution innovation; however, constant changes in technology and in business and societal expectations mean that an iterative analysis solution is required to i) determine whether provider services adequately meet customer segment needs and expectations, and ii) help guide business service innovation and development. In this paper, by incorporating multiple models, we propose a series of steps to help identify and prioritise service gaps. Moreover, the authors propose the Dual Semiosis Analysis Model, a tool that highlights where, within the symbiotic customer/provider semiosis process, requirements misinterpretation and/or service provision deficiencies occur. This paper offers the reader a powerful customer-centric tool, designed to help business managers highlight both which services are critical to customer quality perception and where future innovation should be directed.
Abstract:
Health monitoring technologies such as Body Area Network (BAN) systems have gathered a lot of attention during the past few years, largely encouraged by the rapid increase in the cost of healthcare services and driven by the latest technological advances in Micro-Electro-Mechanical Systems (MEMS) and wireless communications. BAN technology comprises a network of body-worn or implanted sensors that continuously capture and measure vital parameters such as heart rate, blood pressure, glucose levels and movement. The collected data must be transferred to a local base station in order to be further processed. Thus, wireless connectivity plays a vital role in such systems. However, wireless connectivity comes at the cost of increased power usage, mainly due to the high energy consumption during data transmission. Unfortunately, battery-operated devices are unable to operate for ultra-long durations and are expected to be recharged or replaced once they run out of energy. This is not a simple task, especially in the case of implanted devices such as pacemakers. Therefore, prolonging the network lifetime of BAN systems is one of the greatest challenges. In order to achieve this goal, BAN systems take advantage of low-power in-body and on-body/off-body wireless communication technologies. This paper compares some of the existing and emerging low-power communication protocols that can potentially be employed to support the rapid development and deployment of BAN systems.
Abstract:
Increasing efforts exist in integrating different levels of detail in models of the cardiovascular system. For instance, one-dimensional representations are employed to model the systemic circulation. In this context, effective and black-box-type decomposition strategies for one-dimensional networks are needed, so as to: (i) employ domain decomposition strategies for large systemic models (1D-1D coupling) and (ii) provide the conceptual basis for dimensionally-heterogeneous representations (1D-3D coupling, among various possibilities). The strategy proposed in this article works for both of these two scenarios, though the several applications shown to illustrate its performance focus on the 1D-1D coupling case. A one-dimensional network is decomposed in such a way that each coupling point connects two (and not more) of the sub-networks. At each of the M connection points two unknowns are defined: the flow rate and pressure. These 2M unknowns are determined by 2M equations, since each sub-network provides one (non-linear) equation per coupling point. It is shown how to build the 2M x 2M non-linear system with arbitrary and independent choice of boundary conditions for each of the sub-networks. The idea is then to solve this non-linear system until convergence, which guarantees strong coupling of the complete network. In other words, if the non-linear solver converges at each time step, the solution coincides with what would be obtained by monolithically modeling the whole network. The decomposition thus imposes no stability restriction on the choice of the time step size. Effective iterative strategies for the non-linear system that preserve the black-box character of the decomposition are then explored. Several variants of matrix-free Broyden's and Newton-GMRES algorithms are assessed as numerical solvers by comparing their performance on sub-critical wave propagation problems which range from academic test cases to realistic cardiovascular applications. A specific variant of Broyden's algorithm is identified and recommended on the basis of its computational cost and reliability. (C) 2010 Elsevier B.V. All rights reserved.
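The coupling strategy can be sketched in a few lines of Python: the 2M unknowns are gathered in one vector, each sub-network is queried only through a residual call, and a matrix-free Newton-GMRES solver (here SciPy's newton_krylov; the paper also assesses Broyden variants) drives the residual to zero. The `subnetwork_residual` callable is a placeholder standing in for the actual 1D solvers.

```python
import numpy as np
from scipy.optimize import newton_krylov

def coupling_residual(u, subnetwork_residual, M):
    """Residual of the 2M coupling equations for u = [Q_1, P_1, ..., Q_M, P_M]."""
    r = np.empty(2 * M)
    for m in range(M):
        Q, P = u[2 * m], u[2 * m + 1]
        # Each of the two sub-networks meeting at point m supplies one
        # (non-linear) equation relating the interface flow rate and pressure.
        r[2 * m] = subnetwork_residual("left", m, Q, P)
        r[2 * m + 1] = subnetwork_residual("right", m, Q, P)
    return r

def solve_coupled_step(u0, subnetwork_residual, M):
    # Matrix-free Newton-GMRES: only residual evaluations are needed, so the
    # black-box character of the decomposition is preserved.
    F = lambda u: coupling_residual(u, subnetwork_residual, M)
    return newton_krylov(F, u0, f_tol=1e-8)
```

If the solver converges at each time step, the coupled solution coincides with the monolithic one, as stated above.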
Abstract:
In many data sets from clinical studies there are patients insusceptible to the occurrence of the event of interest. Survival models which ignore this fact are generally inadequate. The main goal of this paper is to describe an application of the generalized additive models for location, scale, and shape (GAMLSS) framework to the fitting of long-term survival models. In this work the number of competing causes of the event of interest follows the negative binomial distribution. In this way, some well-known models found in the literature are characterized as particular cases of our proposal. The model is conveniently parameterized in terms of the cured fraction, which is then linked to covariates. We explore the use of the gamlss package in R as a powerful tool for inference in long-term survival models. The procedure is illustrated with a numerical example. (C) 2009 Elsevier Ireland Ltd. All rights reserved.
Abstract:
In 2006 the Route load balancing algorithm was proposed and compared to other techniques aimed at optimizing process allocation in grid environments. This algorithm schedules tasks of parallel applications considering computer neighborhoods (where distance is defined by network latency). Route presents good results for large environments, although there are cases where neighbors have neither enough computational capacity nor a communication system capable of serving the application. In those situations Route migrates tasks until they stabilize in a grid area with enough resources. This migration may take a long time, which reduces overall performance. In order to improve this stabilization time, this paper proposes RouteGA (Route with Genetic Algorithm support), which considers historical information on parallel application behavior as well as computer capacities and loads to optimize the scheduling. This information is extracted by using monitors and summarized in a knowledge base used to quantify the occupation of tasks. Afterwards, this information is used to parameterize a genetic algorithm responsible for optimizing the task allocation. Results confirm that RouteGA outperforms the load balancing carried out by the original Route, which had previously outperformed other scheduling algorithms from the literature.
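As a rough sketch of the genetic-algorithm stage, the Python fragment below evolves task-to-resource assignments towards a low makespan. The fitness function, crossover and mutation operators are generic illustrations; RouteGA's actual fitness is built from the monitored historical behavior, computer capacities and loads described above.

```python
import random

def makespan(assignment, task_cost, machine_speed):
    """Completion time of the most loaded resource for a given assignment."""
    load = [0.0] * len(machine_speed)
    for task, machine in enumerate(assignment):
        load[machine] += task_cost[task] / machine_speed[machine]
    return max(load)

def ga_schedule(task_cost, machine_speed, pop_size=50, generations=200,
                mutation_rate=0.05, rng=random.Random(0)):
    n_tasks, n_machines = len(task_cost), len(machine_speed)
    pop = [[rng.randrange(n_machines) for _ in range(n_tasks)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: makespan(a, task_cost, machine_speed))
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_tasks)
            child = p1[:cut] + p2[cut:]           # one-point crossover
            for i in range(n_tasks):              # uniform mutation
                if rng.random() < mutation_rate:
                    child[i] = rng.randrange(n_machines)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda a: makespan(a, task_cost, machine_speed))
```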
Abstract:
In this work, we deal with the problem of packing (orthogonally and without overlapping) identical rectangles in a rectangle. This problem appears in different logistics settings, such as the loading of boxes onto pallets, the arrangement of pallets in trucks and the stowing of cargo in ships. We present a recursive partitioning approach combining improved versions of a recursive five-block heuristic and an L-approach for packing rectangles into larger rectangles and L-shaped pieces. The combined approach is able to rapidly find the optimal solutions of all instances of the pallet loading problem sets Cover I and II (more than 50 000 instances). It is also effective, compared to other methods from the literature, for solving the instances of problem set Cover III (almost 100 000 instances) and practical examples of a woodpulp stowage problem. Some theoretical results are also discussed and, based on them, efficient computer implementations are introduced. The computer implementation and the data sets are available for benchmarking purposes. Journal of the Operational Research Society (2010) 61, 306-320. doi: 10.1057/jors.2008.141. Published online 4 February 2009.
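For orientation, the Python sketch below computes a simple two-block packing of identical l x w rectangles in an L x W rectangle. It is far simpler than the recursive five-block heuristic and the L-approach combined in the paper, and is shown only to fix the idea of partitioning the container into homogeneous blocks of equally oriented pieces.

```python
def two_block_count(L, W, l, w):
    """Lower bound on the number of l x w pieces fitting in an L x W rectangle,
    using one block of pieces in a first orientation plus a second block filling
    the leftover strip with the rotated orientation (all dimensions integer)."""
    best = 0
    for a, b in ((l, w), (w, l)):
        cols, rows = L // a, W // b          # first block: a along L, b along W
        rest = W - rows * b                  # height of the leftover strip
        cols2, rows2 = L // b, rest // a     # second block: rotated pieces
        best = max(best, cols * rows + cols2 * rows2)
    return best

# Example: a 1000 x 1000 pallet and 200 x 300 boxes.
print(two_block_count(1000, 1000, 200, 300))
```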
Abstract:
In this article, we give an asymptotic formula of order n^{-1/2}, where n is the sample size, for the skewness of the distributions of the maximum likelihood estimates of the parameters in exponential family nonlinear models. We generalize the result of Cordeiro and Cordeiro (2001). The formula is given in matrix notation and is very suitable for computer implementation and for obtaining closed-form expressions for a great variety of models. Some special cases and two applications are discussed.
Abstract:
Cloud computing means that computing power and IT resources in the form of servers, storage and local networks are made available over the internet, and its use has grown rapidly in recent years. The possibilities offered by cloud services have led to new challenges regarding trust in cloud services by opening up entirely new relationships between provider and customer. At present, trust in cloud services is a major problem for customers such as micro-enterprises and small and medium-sized enterprises. Our thesis aims to describe the trust and distrust of micro-enterprises and small and medium-sized enterprises towards cloud services, and to identify which criteria contribute to them. The results show that it is difficult to clearly define trust and distrust among companies, since their situations and operations differ. Thanks to our literature study combined with our interviews, we have formed a good picture of which criteria contribute to trust and distrust in cloud services. Based on our work, we have compiled a conceptual model that describes trust and distrust in cloud services.
Abstract:
Parkinson's disease (PD) is an increasingly common neurological disorder in an aging society. The motor and non-motor symptoms of PD advance as the disease progresses and occur with varying frequency and duration. To assess the full extent of a patient's condition, repeated assessments are necessary to adjust medication. In clinical studies, symptoms are assessed using the unified Parkinson's disease rating scale (UPDRS). On the one hand, subjective rating using the UPDRS relies on clinical expertise; on the other hand, it requires the physical presence of patients in clinics, which implies high logistical costs. Another limitation of clinical assessment is that observation in hospital may not accurately represent a patient's situation at home. For such reasons, the practical frequency of tracking PD symptoms may under-represent the true time scale of PD fluctuations and may result in an overall inaccurate assessment. Current technologies for at-home PD treatment are based on data-driven approaches for which the interpretation and reproduction of results are problematic. The overall objective of this thesis is to develop and evaluate unobtrusive computer methods for enabling remote monitoring of patients with PD. It investigates novel signal and image processing techniques, based on first-principles and data-driven models, for extracting clinically useful information from audio recordings of speech (texts read aloud) and video recordings of gait and finger-tapping motor examinations. The aim is to map between PD symptom severities estimated using the novel computer methods and clinical ratings based on UPDRS part III (motor examination). A web-based test battery system consisting of self-assessment of symptoms and motor function tests was previously constructed for a touch-screen mobile device. A comprehensive speech framework has been developed for this device to analyze text-dependent running speech by: (1) extracting novel signal features that are able to represent PD deficits in each individual component of the speech system, (2) mapping between clinical ratings and feature estimates of speech symptom severity, and (3) classifying between UPDRS part III severity levels using speech features and statistical machine learning tools. A novel speech processing method called cepstral separation difference showed a stronger ability to classify speech symptom severities than existing features of PD speech. In the case of finger tapping, the recorded videos of rapid finger-tapping examinations were processed using a novel computer-vision (CV) algorithm that extracts symptom information from video-based tapping signals using motion analysis of the index finger, incorporating a face-detection module for signal calibration. This algorithm was able to discriminate between UPDRS part III severity levels of finger tapping with high classification rates. Further analysis was performed on novel CV-based gait features constructed using a standard human model to discriminate between a healthy gait and a Parkinsonian gait. The findings of this study suggest that symptom severity levels in PD can be discriminated with high accuracy by combining first-principles (features) and data-driven (classification) approaches. On the one hand, processing the audio and video recordings allows clinical staff to monitor speech, gait and finger-tapping examinations remotely.
On the other hand, the first-principles approach makes the symptom estimates easier for clinicians to interpret. We have demonstrated that the selected features of speech, gait and finger tapping were able to discriminate between symptom severity levels, as well as between healthy controls and PD patients, with high classification rates. The findings support the suitability of these methods as decision-support tools in the context of PD assessment.
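As an illustration of the video-based tapping analysis, the Python sketch below turns a tracked index-finger trajectory into a few summary features via peak detection. The tracked signal, the face-height calibration and the feature set are assumptions made for this sketch; they echo, but do not reproduce, the computer-vision algorithm described in the thesis.

```python
import numpy as np
from scipy.signal import find_peaks

def tapping_features(finger_y, fps, face_height_px):
    """Summarise a video-based finger-tapping signal.

    finger_y       : vertical index-finger position per frame (pixels),
                     assumed to come from an upstream tracking step.
    fps            : video frame rate.
    face_height_px : detected face height, used here as a crude scale
                     for calibration (echoing the face-detection idea).
    """
    y = (np.asarray(finger_y, dtype=float) - np.mean(finger_y)) / face_height_px
    peaks, _ = find_peaks(y, distance=max(1, int(0.1 * fps)))  # at most ~10 taps/s
    duration = len(y) / fps
    tap_rate = len(peaks) / duration                 # taps per second
    amplitude = float(np.mean(y[peaks])) if len(peaks) else 0.0
    intervals = np.diff(peaks) / fps
    rhythm_variability = float(np.std(intervals)) if len(intervals) > 1 else 0.0
    return {"tap_rate": tap_rate,
            "amplitude": amplitude,
            "rhythm_variability": rhythm_variability}
```

Features of this kind can then be fed to the statistical machine-learning tools mentioned above to map onto UPDRS part III severity levels.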
Abstract:
The national railway administrations in Scandinavia, Germany, and Austria mainly resort to manual inspections to control vegetation growth along railway embankments. Manually inspecting railways is slow and time consuming. A more worrying aspect concerns the fact that human observers are often unable to estimate the true cover of vegetation on railway embankments. Furthermore, human observers often tend to disagree with each other when more than one observer is engaged for inspection. The lack of proper techniques to identify the true cover of vegetation even results in excessive use of herbicides, seriously harming the environment and threatening the ecology. Hence, this study has investigated aspects relevant to human variation and agreement in order to recommend better inspection routines. This was studied by carrying out two separate yet related investigations. First, thirteen observers were separately asked to estimate the vegetation cover in nine images acquired (in nadir view) over the railway tracks. All such estimates were compared relatively, and an analysis of variance showed a significant difference in the observers' cover estimates (p<0.05). Bearing in mind the differences between the observers, a second follow-up field study on the railway tracks was initiated and properly investigated. Two railway segments (strata) representing different levels of vegetation were carefully selected. Five sample plots (each covering an area of one-by-one meter) were randomized from each stratum along the rails from the aforementioned segments, and ten images were acquired in nadir view. Furthermore, three observers (with knowledge in the railway maintenance domain) were separately asked to estimate the plant cover by visually examining the plots. Again, an analysis of variance showed a significant difference in the observers' cover estimates (p<0.05), confirming the result from the first investigation. The differences in observations are compared against a computer vision algorithm which detects the "true" cover of vegetation in a given image. The true cover is defined as the amount of greenish pixels in each image as detected by the computer vision algorithm. The results achieved through this comparison strongly indicate that inconsistency is prevalent among the estimates reported by the observers. Hence, an automated approach using computer vision is suggested, thus transferring the manual inspections into objective, monitored inspections.
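A minimal version of the greenish-pixel detection can be sketched in Python with an excess-green index and a fixed threshold; both choices are common in vegetation segmentation but are assumptions here, not necessarily the exact rule used by the study's algorithm.

```python
import numpy as np

def vegetation_cover(rgb_image, threshold=0.05):
    """Fraction of greenish pixels in an RGB image with values in [0, 1]."""
    r, g, b = rgb_image[..., 0], rgb_image[..., 1], rgb_image[..., 2]
    total = r + g + b + 1e-8                       # avoid division by zero
    exg = 2 * g / total - r / total - b / total    # normalised excess-green index
    green_mask = exg > threshold
    return float(green_mask.mean())                # estimated vegetation cover
```

Comparing such an automated estimate with the observers' visual estimates is what exposes the inconsistency reported above.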