Abstract:
In this paper we present a solution to the problem of action and gesture recognition using sparse representations. The dictionary is modelled as a simple concatenation of features computed for each action or gesture class from the training data, and test data is classified by finding a sparse representation of the test video features over this dictionary. Our method does not impose any explicit training procedure on the dictionary. We evaluate our model with two kinds of features, by projecting (i) Gait Energy Images (GEIs) and (ii) motion descriptors to a lower dimension using random projection. Experiments show a 100% recognition rate on standard datasets, and the results are compared with those obtained using the widely used SVM classifier.
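As a rough illustration of this classification scheme, the sketch below (Python; feature dimensions, dataset sizes, and the use of orthogonal matching pursuit as the sparse solver are assumptions, not the paper's exact pipeline) projects features to a lower dimension with a random projection and classifies by class-wise reconstruction residual:

```python
# Hedged sketch: dictionary = concatenated training features; classify by
# the class whose atoms best reconstruct the (projected) test feature.
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.linear_model import orthogonal_mp

def classify_sparse(train_feats, train_labels, test_feat, n_nonzero=10):
    D = train_feats.T.astype(float)             # columns are dictionary atoms
    D /= np.linalg.norm(D, axis=0)              # unit-norm atoms for OMP
    x = orthogonal_mp(D, test_feat, n_nonzero_coefs=n_nonzero)
    best, best_res = None, np.inf
    for c in np.unique(train_labels):
        xc = np.where(train_labels == c, x, 0.0)    # keep only class-c coefficients
        res = np.linalg.norm(test_feat - D @ xc)    # class-wise residual
        if res < best_res:
            best, best_res = c, res
    return best

rng = np.random.default_rng(0)
gei_train = rng.random((60, 4096))              # hypothetical GEI feature vectors
labels = np.repeat(np.arange(6), 10)            # 6 action classes, 10 samples each
proj = GaussianRandomProjection(n_components=128, random_state=0)
low_train = proj.fit_transform(gei_train)
low_test = proj.transform(rng.random((1, 4096)))[0]
print(classify_sparse(low_train, labels, low_test))
```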
Abstract:
Despite major advances in the study of glioma, the quantitative links between intra-tumor molecular/cellular properties, clinically observable properties such as morphology, and critical tumor behaviors such as growth and invasiveness remain unclear, hampering more effective coupling of tumor physical characteristics with implications for prognosis and therapy. Although molecular biology, histopathology, and radiological imaging are employed in this endeavor, studies are severely challenged by the multitude of different physical scales involved in tumor growth, i.e., from the molecular nanoscale to the cell microscale and finally to the tissue centimeter scale. Consequently, it is often difficult to determine the underlying dynamics across dimensions. New techniques are needed to tackle these issues. Here, we address this multi-scalar problem by employing a novel predictive three-dimensional mathematical and computational model based on first-principle equations (conservation laws of physics) that mathematically describe the diffusion of cell substrates and other processes determining tumor mass growth and invasion. The model uses conserved variables to represent known determinants of glioma behavior, e.g., cell density and oxygen concentration, as well as biological functional relationships and parameters linking phenomena at different scales, whose specific forms and values are hypothesized and calculated based on in vitro and in vivo experiments and from histopathology of tissue specimens from human gliomas. This model enables correlation of glioma morphology to tumor growth by quantifying the dependence of tumor mass on the microenvironment (e.g., hypoxia, tissue disruption) and on the cellular phenotypes (e.g., mitosis and apoptosis rates, cell adhesion strength). Once the functional relationships between variables and the associated parameter values have been determined, e.g., from histopathology or intra-operative analysis, this model can be used for disease diagnosis/prognosis, hypothesis testing, and to guide surgery and therapy. In particular, this tool identifies and quantifies the effects of vascularization and other cell-scale glioma morphological characteristics as predictors of tumor-scale growth and invasion.
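As a loose, one-dimensional caricature of a conservation-law model of this kind (this is not the authors' model: the dynamics are reduced to oxygen diffusion-consumption plus oxygen-limited proliferation and diffusive invasion, and every parameter value is invented):

```python
# Loose 1-D caricature of a conservation-law glioma model: tumor cells
# proliferate where oxygen is available and invade by diffusion; oxygen
# diffuses in from the (vascularized) boundary and is consumed by cells.
import numpy as np

nx, dx, dt = 200, 0.05, 0.01            # grid points, spacing (cm), step (days)
D_c, D_o = 0.02, 0.1                    # cell motility, O2 diffusivity (cm^2/day)
rho, lam = 0.5, 1.0                     # proliferation and O2 consumption rates

cell = np.zeros(nx); cell[nx // 2] = 0.5     # initial tumor seed (density, 0..1)
oxy = np.ones(nx)                            # normalized O2 concentration

def lap(u):                                  # periodic finite-difference Laplacian
    return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2

for _ in range(2000):                        # 20 days of simulated growth
    oxy += dt * (D_o * lap(oxy) - lam * cell * oxy)
    oxy[0] = oxy[-1] = 1.0                   # well-perfused tissue boundary
    cell += dt * (D_c * lap(cell) + rho * cell * oxy * (1 - cell))
    cell = np.clip(cell, 0.0, 1.0)

print(f"invaded extent: {np.sum(cell > 0.1) * dx:.2f} cm")
```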
Abstract:
This study deals with indoor positioning using GSM radio, which has the distinct advantage of wide coverage over other wireless technologies. In particular, we focus on passive localization systems that are able to achieve high localization accuracy without any prior knowledge of the indoor environment or the tracked device's radio settings. To overcome these challenges, we propose new localization algorithms that exploit received signal strength (RSS) measurements. We explore the effects of non-line-of-sight communication links, the opening and closing of doors, and human mobility on RSS measurements and localization accuracy. We have implemented the proposed algorithms on top of software-defined radio systems and carried out detailed empirical indoor experiments. The performance results show that the proposed solutions are accurate, with average localization errors between 2.4 and 3.2 meters.
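The abstract does not detail the algorithms themselves, so the following is only a generic RSS baseline of the kind such systems are commonly compared against, weighted k-nearest-neighbor fingerprinting; all signal values and positions below are made up:

```python
# Generic RSS fingerprinting baseline (weighted k-nearest neighbors):
# estimate a position by averaging the k calibration points whose stored
# RSS vectors are closest to the observed one, weighted by inverse distance.
import numpy as np

def wknn_locate(fingerprints, positions, rss, k=3):
    d = np.linalg.norm(fingerprints - rss, axis=1)     # distance in dBm space
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)
    return (w[:, None] * positions[idx]).sum(axis=0) / w.sum()

# 4 GSM carriers, 5 calibration points (RSS in dBm, positions in meters)
fp = np.array([[-70, -80, -65, -90], [-60, -75, -70, -85],
               [-75, -70, -60, -80], [-80, -65, -75, -70],
               [-65, -85, -80, -75]], dtype=float)
pos = np.array([[0, 0], [5, 0], [5, 5], [0, 5], [2.5, 2.5]], dtype=float)
print(wknn_locate(fp, pos, np.array([-68.0, -78.0, -66.0, -88.0])))
```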
Abstract:
We present an experimental study of CO electro-oxidation on Pt(100)-(1 × 1) electrodes employing electrochemical methods in combination with in situ scanning tunneling microscopy (STM) and shell-isolated nanoparticle-enhanced Raman spectroscopy (SHINERS). We discuss the nature and stability of the active sites in the preignition region in the presence of dissolved CO (COb) and monitor substrate structure changes during the COb electro-oxidation process. We corroborate that the electro-oxidation kinetics are decisively determined by the history of CO adlayer formation. A new mechanism is proposed for Pt(100) electrode deactivation in the preignition region after excursion of the electrode potential into the COb ignition region. We believe that this mechanism occurs on Pt surfaces independently of their crystallographic orientation.
Abstract:
In this paper, reconstruction of three-dimensional (3D) patient-specific models of a hip joint from two-dimensional (2D) calibrated X-ray images is addressed. Existing 2D-3D reconstruction techniques usually reconstruct a patient-specific model of a single anatomical structure without considering the relationship to its neighboring structures. Thus, when those techniques are applied to the reconstruction of patient-specific models of a hip joint, the reconstructed models may penetrate each other due to the narrowness of the hip joint space and hence do not represent a true hip joint of the patient. To address this problem we propose a novel 2D-3D reconstruction framework using an articulated statistical shape model (aSSM). Different from previous work on constructing an aSSM, where the joint posture is modeled as articulation in a training set via statistical analysis, here it is modeled as a parametrized rotation of the femur around the joint center. The exact rotation of the hip joint as well as the patient-specific models of the joint structures, i.e., the proximal femur and the pelvis, are then estimated by optimally fitting the aSSM to a limited number of calibrated X-ray images. Taking models segmented from CT data as the ground truth, we conducted validation experiments on both plastic and cadaveric bones. Qualitatively, the experimental results demonstrated that the proposed 2D-3D reconstruction framework preserved the hip joint structure, and no model penetration was found. Quantitatively, average reconstruction errors of 1.9 mm and 1.1 mm were found for the pelvis and the proximal femur, respectively.
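A toy sketch of the articulated-SSM idea follows (shapes, sizes, and the use of a z-axis rotation as the articulation parameter are illustrative only; the paper's silhouette-based fitting to calibrated X-rays is not shown):

```python
# Toy articulated SSM: an instance is the mean shape plus a weighted sum of
# modes, and the femur vertices are additionally rotated about the joint
# center by an articulation parameter.
import numpy as np

def assm_instance(mean, modes, b, femur_idx, center, angle_deg):
    """mean: (N,3); modes: (M,N,3); b: (M,) shape coefficients; femur_idx:
    vertex indices of the femur part; rotation about z stands in for the
    parametrized hip rotation."""
    shape = mean + np.tensordot(b, modes, axes=1)       # statistical part
    a = np.radians(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0,        0.0,       1.0]])
    shape[femur_idx] = (shape[femur_idx] - center) @ R.T + center
    return shape

# Minimal demo with dummy data; fitting would minimize the distance between
# projected silhouettes of this instance and the X-ray contours over (b, angle).
mean = np.zeros((100, 3)); modes = np.zeros((5, 100, 3)); b = np.zeros(5)
print(assm_instance(mean, modes, b, np.arange(50), np.zeros(3), 10.0).shape)
```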
Abstract:
Surgical robots have been proposed ex vivo to drill precise holes in the temporal bone for minimally invasive cochlear implantation. The main risk of the procedure is damage to the facial nerve due to mechanical interaction or due to temperature elevation during the drilling process. To evaluate the thermal risk of the drilling process, a simplified model is proposed which aims to enable an assessment of the risk posed to the facial nerve for a given set of constant process parameters for different mastoid bone densities. The model uses the bone density distribution along the drilling trajectory in the mastoid bone to calculate a time-dependent heat production function at the tip of the drill bit. Using a time-dependent moving point source Green's function, the heat equation can be solved at a certain point in space so that the resulting temperatures can be calculated over time. The model was calibrated and initially verified with in vivo temperature data. The data was collected during minimally invasive robotic drilling of 12 holes in four different sheep. The sheep were anesthetized, and the temperature elevations were measured with a thermocouple inserted in a previously drilled hole next to the planned drilling trajectory. Bone density distributions were extracted from pre-operative CT data by averaging Hounsfield values over the drill bit diameter. Post-operative µCT data was used to verify the drilling accuracy of the trajectories. The comparison of measured and calculated temperatures shows a very good match for both the heating and cooling phases. The average prediction error of the maximum temperature was less than 0.7 °C, and the average root mean square error was approximately 0.5 °C. To analyze potential thermal damage, the model was used to calculate temperature profiles and cumulative equivalent minutes at 43 °C at a minimal distance to the facial nerve. For the selected drilling parameters, the temperature elevation profiles and cumulative equivalent minutes suggest that the thermal elevation of this minimally invasive cochlear implantation surgery may pose a risk to the facial nerve, especially in sclerotic or high-density mastoid bones. Optimized drilling parameters need to be evaluated, and the model could be used for future risk evaluation.
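The two quantities the abstract leans on, the Green's-function temperature response to a point heat source and cumulative equivalent minutes at 43 °C (CEM43), can be sketched as follows; the power history, source distance, and tissue constants are hypothetical stand-ins, not the paper's calibrated values, and the source is kept stationary for brevity:

```python
# Point-source heat-equation response and CEM43 thermal dose (sketch).
import numpy as np

def temp_rise(r, t_eval, q, dt, alpha=1.4e-7, rho_c=3.7e6):
    """Superpose Green's-function responses of a stationary point source with
    power history q[i] (W) at distance r (m):
    dT = sum_i q_i dt / (rho_c (4 pi alpha tau)^{3/2}) * exp(-r^2/(4 alpha tau)),
    with tau = t_eval - t_source."""
    tau = t_eval - np.arange(len(q)) * dt
    ok = tau > 0
    g = np.exp(-r**2 / (4 * alpha * tau[ok])) \
        / (rho_c * (4 * np.pi * alpha * tau[ok]) ** 1.5)
    return np.sum(q[ok] * g) * dt

def cem43(temps, dt_min):
    """CEM43: sum R^(43 - T) dt, with the standard R = 0.25 below 43 °C
    and R = 0.5 at or above it."""
    R = np.where(temps < 43.0, 0.25, 0.5)
    return np.sum(R ** (43.0 - temps) * dt_min)

q = np.full(300, 0.2)                   # 0.2 W for 30 s of drilling, dt = 0.1 s
T = 37.0 + np.array([temp_rise(2e-3, t, q, 0.1) for t in np.arange(1, 60, 1.0)])
print(f"peak {T.max():.1f} °C, CEM43 = {cem43(T, 1 / 60):.3f} min")
```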
Abstract:
Periacetabular Osteotomy (PAO) is a joint preserving surgical intervention intended to increase femoral head coverage and thereby to improve stability in young patients with hip dysplasia. Previously, we developed a CT-based, computer-assisted program for PAO diagnosis and planning, which allows for quantifying the 3D acetabular morphology with parameters such as acetabular version, inclination, lateral center edge (LCE) angle and femoral head coverage ratio (CO). In order to verify the hypothesis that our morphology-based planning strategy can improve biomechanical characteristics of dysplastic hips, we developed a 3D finite element model based on patient-specific geometry to predict cartilage contact stress change before and after morphology-based planning. Our experimental results demonstrated that the morphology-based planning strategy could reduce cartilage contact pressures and at the same time increase contact areas. In conclusion, our computer-assisted system is an efficient tool for PAO planning.
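One of the morphology parameters mentioned, the lateral center edge (LCE) angle, can be computed from two landmarks; the minimal sketch below uses hypothetical coordinates and a fixed cranial axis (the paper's CT-based pipeline and finite element model are far richer):

```python
# LCE angle: angle in the coronal plane between the cranial axis and the
# line from the femoral-head center to the lateral acetabular rim.
import numpy as np

def lce_angle(head_center, lateral_rim, up=np.array([0.0, 0.0, 1.0])):
    v = lateral_rim - head_center
    v[1] = 0.0                                  # project onto coronal plane
    cosang = v @ up / np.linalg.norm(v)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

center = np.array([0.0, 0.0, 0.0])              # femoral head center (mm)
rim = np.array([-12.0, 3.0, 30.0])              # lateral acetabular rim (mm)
print(f"LCE angle = {lce_angle(center, rim):.1f} deg")  # values below ~20 deg are commonly read as dysplastic
```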
Abstract:
Development of homology modeling methods will remain an area of active research. These methods aim to model increasingly accurate three-dimensional structures of as-yet uncrystallized, therapeutically relevant proteins, e.g., Class A G-protein-coupled receptors (GPCRs). Incorporating protein flexibility is one way to achieve this goal. Here, I discuss the enhancement and validation of the ligand-steered modeling method, originally developed by Dr. Claudio Cavasotto, via cross-modeling of newly crystallized GPCR structures. This method uses known ligands and known experimental information to optimize relevant protein binding sites by incorporating protein flexibility. The ligand-steered models reasonably reproduced the binding sites and the co-crystallized native ligand poses of the β2 adrenergic and adenosine A2A receptors using a single template structure. They also performed better than the template and crude models in small-scale high-throughput docking experiments and compound selectivity studies. Next, the application of this method to develop high-quality homology models of Cannabinoid Receptor 2, an emerging non-psychotic pain management target, is discussed. These models were validated by their ability to rationalize structure-activity relationship data for two series of compounds, inverse agonists and agonists. The method was also applied to improve the virtual screening performance of the β2 adrenergic crystal structure by optimizing the binding site using β2-specific compounds. These results show the feasibility of optimizing only the pharmacologically relevant protein binding sites and the method's applicability to structure-based drug design projects.
Abstract:
Culture and mesocosm experiments are often carried out under high initial nutrient concentrations, yielding high biomass concentrations that in turn often lead to a substantial build-up of DOM. In such experiments, DOM can reach concentrations much higher than typically observed in the open ocean. To the extent that DOM includes organic acids and bases, it will contribute to the alkalinity of the seawater contained in the experimental device. Our analysis suggests that whenever substantial amounts of DOM are produced during the experiment, standard computer programmes used to compute CO2 fugacity can significantly underestimate the true fCO2 when the computation is based on AT and CT. Unless the effect of DOM alkalinity can be accounted for, this might lead to significant errors in the interpretation of the system under consideration with respect to the experimentally applied CO2 perturbation. Errors in the inferred fCO2 can misguide the development of parameterisations used in simulations with global carbon cycle models under future CO2 scenarios. Overdetermination of the CO2 system in experimental ocean acidification studies is proposed to safeguard against possibly large errors in estimated fCO2.
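The underestimation mechanism can be illustrated with a deliberately stripped-down carbonate-system solver (carbonate alkalinity only, ignoring borate and water terms; the equilibrium constants are rough round numbers, not from a standard constants set, and the DOM-alkalinity amount is invented):

```python
# If part of the measured total alkalinity AT is organic, treating all of AT
# as carbonate alkalinity when computing fCO2 from (AT, CT) biases fCO2 low.
from scipy.optimize import brentq

K0, K1, K2 = 3.2e-2, 1.2e-6, 8.3e-10   # mol/kg/atm, mol/kg, mol/kg (approx.)

def fco2_from_at_ct(AT, CT):
    """Solve for [H+] so carbonate alkalinity matches AT; return fCO2 (uatm)."""
    def resid(h):
        d = h * h + K1 * h + K1 * K2
        return CT * (K1 * h + 2 * K1 * K2) / d - AT   # HCO3- + 2 CO3-- - AT
    h = brentq(resid, 1e-10, 1e-6)
    co2 = CT * h * h / (h * h + K1 * h + K1 * K2)     # [CO2*]
    return co2 / K0 * 1e6

AT_measured, CT = 2.35e-3, 2.10e-3       # mol/kg
A_org = 5.0e-5                           # hypothetical 50 umol/kg DOM alkalinity
naive = fco2_from_at_ct(AT_measured, CT)            # DOM alkalinity ignored
corrected = fco2_from_at_ct(AT_measured - A_org, CT)  # carbonate AT corrected
print(f"naive {naive:.0f} uatm vs corrected {corrected:.0f} uatm")
```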
Abstract:
Proof carrying code (PCC) is a general methodology for certifying that the execution of an untrusted mobile code is safe. The basic idea is that the code supplier attaches a certificate to the mobile code which the consumer checks in order to ensure that the code is indeed safe. The certificate is originally a proof in first-order logic of certain verification conditions, and the checking process involves ensuring that the certificate is indeed a valid first-order proof. The potential benefit is that the consumer's task is reduced from the level of proving to the level of checking. The main practical difficulty of PCC techniques is in generating safety certificates which at the same time: i) allow expressing interesting safety properties, ii) can be generated automatically and, iii) are easy and efficient to check. In [1], the abstract interpretation techniques [5] developed in logic programming are proposed as a basis for PCC. They offer a number of advantages for dealing with the aforementioned issues. In particular, the expressiveness of existing abstract domains is implicitly available in abstract interpretation-based code certification to define a wide range of safety properties. Furthermore, the approach inherits the automation and inference power of the abstract interpretation engines used in (Constraint) Logic Programming, (C)LP. This extended abstract reports on experiments which illustrate several issues involved in abstract interpretation-based certification. First, we describe the implementation of our system in the context of CiaoPP: the preprocessor of the Ciao multi-paradigm programming system. Then, by means of some experiments, we show how code certification is aided in the implementation of the framework. Finally, we discuss the application of our method within the area of pervasive systems.
Abstract:
Proof carrying code is a general methodology for certifying that the execution of an untrusted mobile code is safe, according to a predefined safety policy. The basic idea is that the code supplier attaches a certificate (or proof) to the mobile code which, then, the consumer checks in order to ensure that the code is indeed safe. The potential benefit is that the consumer's task is reduced from the level of proving to the level of checking, a much simpler task. Recently, the abstract interpretation techniques developed in logic programming have been proposed as a basis for proof carrying code [1]. To this end, the certificate is generated from an abstract interpretation-based proof of safety. Intuitively, the verification condition is extracted from a set of assertions guaranteeing safety and the answer table generated during the analysis. Given this information, it is relatively simple and fast to verify that the code does meet this proof and so its execution is safe. This extended abstract reports on experiments which illustrate several issues involved in abstract interpretation-based code certification. First, we describe the implementation of our system in the context of CiaoPP: the preprocessor of the Ciao multi-paradigm (constraint) logic programming system. Then, by means of some experiments, we show how code certification is aided in the implementation of the framework. Finally, we discuss the application of our method within the area of pervasive systems, which may lack the necessary computing resources to verify safety on their own. We illustrate the relevance of the information inferred by existing cost analyses for controlling resource usage in this context. Moreover, since the (rather complex) analysis phase is replaced by a simpler, efficient checking process at the code consumer side, we believe that our abstract interpretation-based approach to proof-carrying code becomes practically applicable to this kind of system.
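A toy sketch of the core idea, checking a shipped abstract fixpoint rather than recomputing it, might look like the following (a made-up sign-domain example, not CiaoPP or Ciao assertions):

```python
# Abstraction-carrying-code toy: the supplier ships an abstract fixpoint as
# the certificate; the consumer re-checks it in one cheap pass (no iteration).
# Abstract domain of signs: bot <= {neg, zero, pos} <= top.

def join(a, b):
    if a == b or b == "bot": return a
    if a == "bot": return b
    return "top"

def check_certificate(transfer_fns, cert, entry):
    """Consumer side: verify cert is a post-fixpoint covering the entry state,
    applying each transfer function exactly once."""
    ok = join(entry, cert[0]) == cert[0]
    for i, f in enumerate(transfer_fns):
        ok = ok and join(f(cert[i]), cert[i + 1]) == cert[i + 1]
    return ok

# Hypothetical 2-statement program: x = x*x ; x = x + 1  (sign semantics)
fns = [lambda s: "zero" if s == "zero" else ("pos" if s in ("neg", "pos") else s),
       lambda s: "pos" if s in ("pos", "zero") else ("top" if s in ("neg", "top") else s)]
cert = ["pos", "pos", "pos"]       # fixpoint computed once by the supplier
print(check_certificate(fns, cert, entry="pos"))   # True: x stays positive
```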
Abstract:
The term "Logic Programming" refers to a variety of computer languages and execution models which are based on the traditional concept of Symbolic Logic. The expressive power of these languages offers promise to be of great assistance in facing the programming challenges of present and future symbolic processing applications in Artificial Intelligence, Knowledge-based systems, and many other areas of computing. The sequential execution speed of logic programs has been greatly improved since the advent of the first interpreters. However, higher inference speeds are still required in order to meet the demands of applications such as those contemplated for next generation computer systems. The execution of logic programs in parallel is currently considered a promising strategy for attaining such inference speeds. Logic Programming in turn appears as a suitable programming paradigm for parallel architectures because of the many opportunities for parallel execution present in the implementation of logic programs. This dissertation presents an efficient parallel execution model for logic programs. The model is described from the source language level down to an "Abstract Machine" level suitable for direct implementation on existing parallel systems or for the design of special purpose parallel architectures. Few assumptions are made at the source language level and therefore the techniques developed and the general Abstract Machine design are applicable to a variety of logic (and also functional) languages. These techniques offer efficient solutions to several areas of parallel Logic Programming implementation previously considered problematic or a source of considerable overhead, such as the detection and handling of variable binding conflicts in AND-Parallelism, the specification of control and management of the execution tree, the treatment of distributed backtracking, and goal scheduling and memory management issues, etc. A parallel Abstract Machine design is offered, specifying data areas, operation, and a suitable instruction set. This design is based on extending to a parallel environment the techniques introduced by the Warren Abstract Machine, which have already made very fast and space efficient sequential systems a reality. Therefore, the model herein presented is capable of retaining sequential execution speed similar to that of high performance sequential systems, while extracting additional gains in speed by efficiently implementing parallel execution. These claims are supported by simulations of the Abstract Machine on sample programs.
Abstract:
We present two approaches to clustering dialogue-based information obtained from the speech understanding module and the dialogue manager of a spoken dialogue system. The purpose is to estimate a language model (LM) for each cluster and use these models to dynamically modify the model of the speech recognizer at each dialogue turn. In the first approach we build the cluster tree using local decisions based on a Maximum Normalized Mutual Information criterion. In the second we take global decisions, based on optimizing the overall perplexity of the combination of the cluster-related LMs. Our experiments show a relative reduction of the word error rate of 15.17%, which helps improve the performance of the understanding and dialogue manager modules.
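A minimal sketch of the per-turn adaptation step follows (unigram LMs with add-one smoothing stand in for whatever LMs the system actually uses; the clusters and sentences are invented):

```python
# Per-turn LM selection: estimate one LM per dialogue-state cluster, then
# pick (or weight) the cluster whose LM gives the lowest perplexity on the
# current turn's context.
import math
from collections import Counter

def unigram_lm(sentences, vocab, alpha=1.0):
    """Add-alpha smoothed unigram probabilities from a cluster's sentences."""
    c = Counter(w for s in sentences for w in s.split())
    total = sum(c.values()) + alpha * len(vocab)
    return {w: (c[w] + alpha) / total for w in vocab}

def perplexity(lm, sentence):
    words = sentence.split()
    return math.exp(-sum(math.log(lm[w]) for w in words) / len(words))

clusters = {"ask_time": ["what time is it", "tell me the time"],
            "ask_route": ["which bus goes downtown", "route to the station"]}
vocab = {w for ss in clusters.values() for s in ss for w in s.split()}
lms = {name: unigram_lm(ss, vocab) for name, ss in clusters.items()}

turn = "what time is the bus"
best = min(lms, key=lambda n: perplexity(lms[n], turn))
print(best, {n: round(perplexity(lm, turn), 1) for n, lm in lms.items()})
```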