953 results for Computations


Relevance: 10.00%

Abstract:

If we classify the variables in a program into various security levels, then a secure information flow analysis aims to verify statically that information in the program can flow only in ways consistent with the specified security levels. One well-studied approach is to formulate the rules of the secure information flow analysis as a type system. A major trend of recent research focuses on how to accommodate various sophisticated modern language features. However, this approach often leads to overly complicated and restrictive type systems, making them unfit for practical use. Also, problems essential to practical use, such as type inference and error reporting, have received little attention. This dissertation identified and solved major theoretical and practical hurdles to the application of secure information flow.

We adopted a minimalist approach to designing our language to ensure a simple, lenient type system. We started with a small, simple imperative language and added only the features we deemed most important for practical use. One language feature we addressed is arrays. Because of the various leakage channels associated with array operations, arrays have received complicated and restrictive typing rules in other secure languages. We presented a novel approach to lenient array operations, which leads to simple and lenient typing of arrays.

Type inference is necessary because a user is usually concerned only with the security types of the input/output variables of a program and would like all types for auxiliary variables to be inferred automatically. We presented a type inference algorithm B and proved its soundness and completeness. Moreover, algorithm B stays close to the program and the type system, and therefore facilitates informative error reporting generated in a cascading fashion. Algorithm B and the error reporting have been implemented and tested.

Lastly, we presented a novel framework for developing applications that ensure user information privacy. In this framework, core computations are defined as code modules that involve input/output data from multiple parties. Secure flow policies are refined incrementally based on feedback from type checking/inference. Core computations interact with code modules from the involved parties only through well-defined interfaces. All code modules are digitally signed to ensure their authenticity and integrity.
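The core idea of a security type system, as described above, can be illustrated with a minimal sketch. This is not the dissertation's actual type system or its algorithm B; it is a hypothetical two-level lattice and a checker that rejects assignments flowing HIGH data into LOW variables, including implicit flows via a control-context level `pc`:

```python
# Minimal sketch (not the dissertation's type system): a two-level security
# lattice {LOW, HIGH} and a checker for assignments in a toy language.

LOW, HIGH = 0, 1  # lattice order: LOW <= HIGH

def expr_level(expr_vars, env):
    """Level of an expression = join (max) of the levels of its variables."""
    return max((env[v] for v in expr_vars), default=LOW)

def check_assign(target, expr_vars, env, pc=LOW):
    """Allow x := e only if join(level(e), pc) <= level(x).

    pc is the security level of the control context; it rules out implicit
    flows through branches that test HIGH data.
    """
    return max(expr_level(expr_vars, env), pc) <= env[target]

env = {"secret": HIGH, "public": LOW, "out": HIGH}
print(check_assign("public", ["secret"], env))           # False: HIGH -> LOW
print(check_assign("out", ["secret", "public"], env))    # True: join is HIGH
print(check_assign("public", ["public"], env, pc=HIGH))  # False: implicit flow
```

A lenient system in the dissertation's sense would accept more programs than this strict check while preserving the same noninterference guarantee.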

Relevance: 10.00%

Abstract:

This research establishes new optimization methods for pattern recognition and classification of different white blood cells in actual patient data to enhance the process of diagnosis. Beckman-Coulter Corporation supplied flow cytometry data from numerous patients, which were used as training sets to exploit the different physiological characteristics of the samples provided. Support Vector Machines (SVM) and Artificial Neural Networks (ANN) were used as promising pattern classification techniques to identify different white blood cell samples and to provide medical doctors with diagnostic references for specific disease states such as leukemia. The results show that a neural network classifier, when well configured and trained with cross-validation, can outperform support vector classifiers alone on this type of data. Furthermore, a new unsupervised learning algorithm, the Density-based Adaptive Window Clustering algorithm (DAWC), was designed to process large volumes of data and locate dense data clusters in real time. It reduces the computational load to ~O(N) computations, making the algorithm faster and more attractive than current hierarchical algorithms.
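The abstract does not spell out DAWC itself, but the general idea of finding dense regions in linear time can be sketched. The following hypothetical example bins 1-D data into fixed-width windows in a single pass and reports the densest window; it is an illustration of O(N) density-window clustering, not the DAWC algorithm:

```python
# Hedged sketch (not the actual DAWC algorithm): locate the densest region
# of 1-D data in O(N) by counting points per fixed-width window in one pass.
from collections import Counter

def densest_window(data, width):
    """Return (window_start, count) for the fixed-width bin with most points."""
    counts = Counter(int(x // width) for x in data)   # single pass over data
    bin_idx, count = max(counts.items(), key=lambda kv: kv[1])
    return bin_idx * width, count

data = [0.1, 0.2, 0.25, 5.0, 5.1, 5.15, 5.2, 9.9]
start, count = densest_window(data, width=1.0)
print(start, count)  # densest unit window starts at 5.0 and holds 4 points
```

A hierarchical clustering of the same N points would cost at least O(N^2) pairwise distances, which is the comparison the abstract draws.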

Relevance: 10.00%

Abstract:

Purpose. The goal of this study is to improve the favorable molecular interactions between starch and PPC by the addition of the grafting monomers MA and ROM as compatibilizers, which would advance the mechanical properties of starch/PPC composites.

Methodology. Calculations based on DFT and semi-empirical methods were performed on three systems: (a) starch/PPC, (b) starch/PPC-MA, and (c) starch-ROM/PPC. The theoretical computations involved determining the optimal geometries, binding energies, and vibrational frequencies of the blended polymers.

Findings. Calculations performed on five starch/PPC composites revealed hydrogen bond formation as the driving force behind stable composite formation, also confirmed by the negative relative energies of the composites, which indicate binding forces between the constituent co-polymers. The interaction between starch and PPC is further confirmed by the computed decrease in the stretching frequencies of the CO and OH groups participating in hydrogen bonding, which agrees qualitatively with the experimental values.

A three-step mechanism for grafting MA onto PPC was proposed to improve the compatibility of PPC with starch. Nine types of 'blends' produced by covalent bond formation between starch and MA-grafted PPC were found to be energetically stable, with blends involving MA grafted at the 'B' and 'C' positions of PPC showing binding-energy increases of 6.8 and 6.2 kcal/mol, respectively, compared to the non-grafted starch/PPC composites. A similar increase in binding energies was observed for three types of 'composites' formed by hydrogen bonding between starch and MA-grafted PPC.

Next, the grafting of ROM onto starch and subsequent blend formation with PPC was studied. All four types of blends formed by the reaction of ROM-grafted starch with PPC were found to be more energetically stable than the starch/PPC composite and the starch/PPC-MA composites and blends. A blend of PPC and ROM grafted at the ' a&d12; ' position on amylose exhibited a maximal increase of 17.1 kcal/mol compared with the starch/PPC-MA blend.

Conclusions. ROM was found to be a more effective compatibilizer than MA in improving the favorable interactions between starch and PPC. The ' a&d12; ' position was found to be the most favorable attachment point of ROM to amylose for stable blend formation with PPC.
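The binding energies compared above follow the usual definition: the energy of the optimized complex minus the energies of the isolated components. The arithmetic can be sketched as below; the component energies are invented for illustration and are not the dissertation's computed values:

```python
# Illustrative only: binding energy of a composite from component energies,
# dE = E(complex) - E(starch) - E(PPC), in kcal/mol. The numbers here are
# hypothetical, chosen so the grafting gain matches the ~6.8 kcal/mol scale
# reported above; they are not the study's actual DFT energies.
def binding_energy(e_complex, e_a, e_b):
    return e_complex - e_a - e_b

dE_plain = binding_energy(-1010.0, -600.0, -400.0)  # non-grafted composite
dE_graft = binding_energy(-1016.8, -600.0, -400.0)  # MA-grafted composite
print(dE_graft - dE_plain)  # grafting gain, about -6.8 kcal/mol (more stable)
```

A more negative dE means a more strongly bound composite, which is why the grafted blends with larger binding-energy increases are reported as more stable.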

Relevance: 10.00%

Abstract:

This study investigated the influence that receiving instruction in two languages, English and Spanish, had on the performance of students enrolled in the International Studies (IS) Program (a delayed partial immersion model) of Miami Dade County Public Schools on three sections of a standardized test in English, the Stanford Achievement Test, eighth edition: Reading Comprehension, Mathematics Computation, and Mathematics Applications.

The performance of the selected IS program/Spanish section cohort (N = 55) on the SAT Reading Comprehension, Mathematics Computation, and Mathematics Applications sections over four consecutive years was contrasted with that of a control group of comparable students selected from the same feeder pattern where the IS program is implemented (N = 21). The performance of the group was also compared with the cross-sectional achievement patterns of the school's corresponding feeder pattern, region, and district.

The research model for the study was a variation of the causal-comparative or ex post facto design, sometimes referred to as prospective. After data were collected from MDCPS, t-tests were performed to compare the IS-Spanish students' SAT performance for grades 3 to 6 in the years 1994 to 1997 with the control group, feeder pattern, region, and district norms for each year for Reading Comprehension, Mathematics Computation, and Mathematics Applications. Repeated-measures ANOVA and Tukey's tests were calculated to compare the mean percentiles of the groups under study and the possible interactions of the different variables. All tests were performed at the 5% significance level.

The analyses showed that the IS group performed significantly better than the control group on all three measures across the four years. The IS group's mean percentiles on the three measures were also significantly higher than those of the feeder pattern, region, and district. The null hypotheses were rejected, and it was concluded that receiving instruction in two languages did not negatively affect the performance of IS program students on tests taken in English. It was also concluded that the particular design of the IS program enhances the general performance of participating students on standardized tests.

The quantitative analyses were coupled with interviews of teachers and administrators of the IS program to gain additional insight into different aspects of the program's implementation at each school.
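The group comparisons above rest on two-sample t-tests. A minimal version of such a test statistic (Welch's form, which does not assume equal variances) can be sketched with the standard library; the percentile scores below are made up for illustration and are not the study's SAT data:

```python
# Sketch of the kind of two-sample comparison described above: Welch's
# t statistic for two groups of percentile scores. The data are invented,
# not the study's results.
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic (unequal variances, unequal sample sizes)."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

is_group      = [72, 68, 75, 70, 74, 69]   # hypothetical IS percentiles
control_group = [61, 58, 64, 60, 59, 63]   # hypothetical control percentiles
t = welch_t(is_group, control_group)
print(round(t, 2))  # a large positive t favors the IS group
```

In practice the statistic would be compared against a t distribution with Welch-Satterthwaite degrees of freedom at the study's 5% significance level.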

Relevance: 10.00%

Abstract:

The northern Antarctic Peninsula is one of the fastest-changing regions on Earth. The disintegration of the Larsen-A Ice Shelf in 1995 caused its tributary glaciers to adjust by speeding up, lowering their surfaces, and increasing overall ice-mass discharge. In this study, we investigate the temporal variation of these changes at the Dinsmoor-Bombardier-Edgeworth glacier system by analyzing dense time series from various spaceborne and airborne Earth observation missions, covering pre-collapse ice shelf conditions and the subsequent adjustments through 2014. Our results show a response of the glacier system some months after the breakup, reaching maximum surface velocities at the glacier front of up to 8.8 m/d in 1999 and subsequently decreasing to ~1.5 m/d in 2014. Using a dense time series of interferometrically derived TanDEM-X digital elevation models and photogrammetric data, an exponential function was fitted to the decrease in surface elevation. Elevation changes in areas below 1000 m a.s.l. amounted to at least 130±15 m between 1995 and 2014, with change rates of ~3.15 m/a between 2003 and 2008. Current change rates (2010-2014) are in the range of 1.7 m/a. Mass imbalances were computed under different scenarios of boundary conditions. The most plausible results amount to -40.7±3.9 Gt. The contribution to sea level rise was estimated at 18.8±1.8 Gt, corresponding to a sea level equivalent of 0.052±0.005 mm, for the period 1995-2014. Our analysis and scenario considerations revealed that major uncertainties still exist due to insufficiently accurate ice-thickness information. The second-largest uncertainty in the computations was the glacier surface mass balance, which is still poorly known. Our time series analysis facilitates improved comparison with GRACE data and provides input for modeling glacio-isostatic uplift in this region. The study contributes to a better understanding of how glacier systems adjust to ice shelf disintegration.
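The exponential fit mentioned above can be sketched with a standard log-linear least-squares trick: fitting y = A exp(-k t) by regressing ln(y) on t. The thinning rates below are invented for illustration and are not the TanDEM-X results:

```python
# Sketch of the exponential-decay fit described above, via least squares on
# log-transformed values. Data are hypothetical thinning rates, not the
# study's elevation-change measurements.
import math

def fit_exp_decay(t, y):
    """Fit y = A * exp(-k t) by linear least squares on ln(y); return (A, k)."""
    logs = [math.log(v) for v in y]
    n = len(t)
    tbar, lbar = sum(t) / n, sum(logs) / n
    slope = (sum((ti - tbar) * (li - lbar) for ti, li in zip(t, logs))
             / sum((ti - tbar) ** 2 for ti in t))
    return math.exp(lbar - slope * tbar), -slope

t = [0, 2, 4, 6, 8]             # years since collapse (hypothetical)
y = [8.0, 5.4, 3.6, 2.4, 1.6]   # surface-lowering rate, m/a (hypothetical)
A, k = fit_exp_decay(t, y)
print(round(A, 2), round(k, 3))  # initial rate ~8 m/a, decay constant ~0.2/a
```

A least-squares fit in log space weights small values more heavily than a direct nonlinear fit would; for clean, strictly positive decay data like elevation-change rates, the two give similar parameters.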

Relevance: 10.00%

Abstract:

R.N.P. and P.J.H. are grateful for funding from an NSERC Discovery Grant. Computations were performed on the GPC supercomputer at the SciNet HPC Consortium. SciNet is funded by the Canada Foundation for Innovation under the auspices of Compute Canada, the Government of Ontario, Ontario Research Fund—Research Excellence and the University of Toronto. Numerical calculations were done using a modified version of the SOPALE (2000) software. The SOPALE modelling code was originally developed by Philippe Fullsack at Dalhousie University with Chris Beaumont and his Geodynamics group.

Relevance: 10.00%

Abstract:

Because of their extensive applications, lifetime distributions with bathtub-shaped failure rate functions have received considerable attention in the modelling and analysis of lifetime data. The purpose of this thesis was to introduce a new class of bivariate lifetime distributions with bathtub-shaped failure rates (BTFRFs). In this research, we first reviewed univariate lifetime distributions with bathtub-shaped failure rates and several multivariate extensions of a univariate failure rate function. We then introduced a new class of bivariate distributions with bathtub-shaped failure rates (hazard gradients). Specifically, the new class of bivariate lifetime distributions was developed using Morgenstern's method of defining a bivariate class of distributions with given marginals. Computer simulations and numerical computations were used to investigate the properties of these distributions.
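Morgenstern's construction (the Farlie-Gumbel-Morgenstern family) builds a bivariate CDF from two given marginals F and G as H(x, y) = F(x) G(y) [1 + a (1 - F(x)) (1 - G(y))] with |a| <= 1. The sketch below uses exponential marginals as a stand-in for the bathtub-shaped ones studied in the thesis:

```python
# Sketch of Morgenstern's (FGM) construction with given marginals:
#   H(x, y) = F(x) G(y) [1 + a (1 - F(x)) (1 - G(y))],  |a| <= 1.
# Exponential marginals stand in for the thesis's bathtub-shaped ones.
import math

def fgm_cdf(x, y, F, G, a=0.5):
    """FGM bivariate CDF built from marginal CDFs F and G; requires |a| <= 1."""
    u, v = F(x), G(y)
    return u * v * (1 + a * (1 - u) * (1 - v))

F = lambda x: 1 - math.exp(-x)       # exponential(rate=1) marginal
G = lambda y: 1 - math.exp(-2 * y)   # exponential(rate=2) marginal

print(round(fgm_cdf(1.0, 1.0, F, G), 4))
# a = 0 recovers independence: H(x, y) = F(x) G(y)
assert fgm_cdf(1.0, 1.0, F, G, a=0) == F(1.0) * G(1.0)
```

By construction the marginals of H are exactly F and G, which is why the method is convenient for building a bivariate class around chosen univariate bathtub-shaped distributions.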

Relevance: 10.00%

Abstract:

When we study the variables that affect survival time, we usually estimate their effects with the Cox regression model. In biomedical research, the effects of the covariates are often modified by a biomarker variable, which leads to covariate-biomarker interactions. Here, a biomarker is an objective measurement of patient characteristics at baseline. Liu et al. (2015) built a local partial likelihood bootstrap model to estimate and test this interaction effect of covariates and biomarker, but the R code developed by Liu et al. (2015) can only handle one variable and one interaction term and cannot fit the model with adjustment for nuisance variables. In this project, we extend the model to allow adjustment for nuisance variables, extend the R code to take any chosen interaction terms, and expose many parameters so that users can customize their analyses. We also build an R package called "lplb" to integrate the complex computations into a simple interface. We conduct numerical simulations to show that the new method has excellent finite-sample properties under both the null and alternative hypotheses. We also applied the method to analyze data from a prostate cancer clinical trial with an acid phosphatase (AP) biomarker.
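The quantity at the heart of this kind of model is the Cox partial likelihood with an interaction term in the linear predictor. The sketch below is not the lplb implementation (which is in R and uses local partial likelihood with bootstrap); it only evaluates the ordinary log partial likelihood for a model with a covariate x, a biomarker z, and their interaction x*z, on a tiny invented data set:

```python
# Hedged sketch (not the lplb package): Cox log partial likelihood for a
# linear predictor b1*x + b2*z + b3*x*z, assuming no tied event times.
import math

def cox_loglik(times, events, x, z, b1, b2, b3):
    """Log partial likelihood; risk set = subjects with time >= event time."""
    eta = [b1 * xi + b2 * zi + b3 * xi * zi for xi, zi in zip(x, z)]
    order = sorted(range(len(times)), key=lambda i: times[i])
    ll = 0.0
    for pos, i in enumerate(order):
        if events[i]:  # event observed at times[i]
            risk = sum(math.exp(eta[j]) for j in order[pos:])
            ll += eta[i] - math.log(risk)
    return ll

# Tiny hypothetical data set (not the prostate cancer trial data).
times  = [2.0, 3.5, 1.0, 4.2, 2.8]
events = [1, 0, 1, 1, 0]
x      = [1, 0, 1, 0, 1]      # treatment covariate
z      = [0.2, 0.8, 0.5, 0.1, 0.9]  # biomarker at baseline
print(round(cox_loglik(times, events, x, z, 0.5, -0.3, 0.4), 3))
```

Maximizing this function over (b1, b2, b3) gives the usual Cox estimates; the local partial likelihood approach instead lets the interaction effect vary smoothly with the biomarker value.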

Relevance: 10.00%

Abstract:

The large upfront investments required for game development pose a severe barrier to the wider uptake of serious games in education and training. There is also a lack of well-established methods and tools that support game developers in preserving and enhancing the games' pedagogical effectiveness. The RAGE project, a Horizon 2020 funded research project on serious games, addresses these issues by making available reusable software components designed to support the pedagogical qualities of serious games. So that these game components can be easily deployed and integrated in a multitude of game engines, platforms, and programming languages, RAGE has developed and validated a hybrid component-based software architecture that preserves component portability and interoperability. While a first set of software components is being developed, this paper presents selected examples to explain the overall system concept and its practical benefits. First, the Emotion Detection component uses learners' webcams to capture their emotional states from facial expressions. Second, the Performance Statistics component is an add-on for learning analytics data processing, which allows instructors to track and inspect learners' progress without worrying about the required statistical computations. Third, a set of language processing components supports the analysis of learners' textual inputs, facilitating comprehension assessment and prediction. Fourth, the Shared Data Storage component provides a technical solution for data storage - e.g. for player data or game world data - across multiple software components. The presented components are exemplary of the anticipated RAGE library, which will include up to forty reusable software components for serious gaming, addressing diverse pedagogical dimensions.

Relevance: 10.00%

Abstract:

Context. The 30 Doradus (30 Dor) region of the Large Magellanic Cloud, also known as the Tarantula Nebula, is the nearest starburst region. It contains the richest population of massive stars in the Local Group, and it is thus the best available laboratory to investigate open questions on the formation and evolution of massive stars. Aims. Using ground-based multi-object optical spectroscopy obtained in the framework of the VLT-FLAMES Tarantula Survey (VFTS), we aim to establish the (projected) rotational velocity distribution for a sample of 216 presumably single O-type stars in 30 Dor. The sample is large enough to obtain statistically significant information and to search for variations among subpopulations - in terms of spectral type, luminosity class, and spatial location - in the field of view. Methods. We measured projected rotational velocities, ve sin i, by means of a Fourier transform method and a profile fitting method applied to a set of isolated spectral lines. We also used an iterative deconvolution procedure to infer the probability density, P(ve), of the equatorial rotational velocity, ve. Results. The distribution of ve sin i shows a two-component structure: a peak around 80 km s^-1 and a high-velocity tail extending up to 600 km s^-1. This structure is also present in the inferred distribution P(ve), with around 80% of the sample having 0 < ve ≤ 300 km s^-1 and the other 20% distributed in the high-velocity region. The presence of the low-velocity peak is consistent with what has been found in other studies for late O- and early B-type stars. Conclusions. Most of the stars in our sample rotate at less than 20% of their break-up velocity. For the bulk of the sample, mass loss in a stellar wind and/or envelope expansion is not efficient enough to significantly spin down these stars within the first few Myr of evolution. If massive-star formation results in stars rotating at birth at a large fraction of their break-up velocities, an alternative braking mechanism, possibly magnetic fields, is thus required to explain the present-day rotational properties of the O-type stars in 30 Dor. The presence of a sizeable population of fast rotators is compatible with recent population synthesis computations that investigate the influence of binary evolution on the rotation rate of massive stars. Even though we have excluded stars that show significant radial velocity variations, our sample may still be contaminated by post-interaction binary products. That the high-velocity tail may be populated primarily (and perhaps exclusively) by post-binary-interaction products has important implications for the evolutionary origin of systems that produce gamma-ray bursts. © 2013 Author(s).
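The reason a deconvolution step is needed is that only the projection ve sin i is observable. For randomly oriented rotation axes, cos i is uniform on [0, 1], so the mean projection factor is <sin i> = pi/4 ≈ 0.785. A quick Monte Carlo check (illustrative only, not the survey's iterative deconvolution procedure):

```python
# Sketch of the projection effect behind ve sin i: for isotropically oriented
# rotation axes, cos(i) is uniform on [0, 1], so the average projection
# factor <sin i> equals pi/4. (Not the VFTS deconvolution itself.)
import math, random

random.seed(1)
N = 200_000
# draw cos(i) ~ Uniform(0, 1) and convert to sin(i)
sin_i = [math.sqrt(1 - random.random() ** 2) for _ in range(N)]
mean_proj = sum(sin_i) / N
print(round(mean_proj, 3))  # close to pi/4 ~ 0.785
```

This is why, on average, measured ve sin i values understate the true equatorial velocities by roughly 20%, and why recovering P(ve) from the observed distribution requires an inversion procedure.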

Relevance: 10.00%

Abstract:

The emerging field of quantum thermodynamics is contributing important results and insights into archetypal many-body problems, including quantum phase transitions. Still, the question of whether out-of-equilibrium quantities, such as fluctuations of work, exhibit critical scaling after a sudden quench in a closed system has remained open. Here, we take a novel approach to the problem by studying a quench across an impurity quantum critical point. By performing density matrix renormalization group computations on the two-impurity Kondo model, we establish that the irreversible work produced in a quench exhibits finite-size scaling at quantum criticality. This scaling faithfully predicts the equilibrium critical exponents for the crossover length and the order parameter of the model and, moreover, implies a new exponent for the rescaled irreversible work. By connecting the irreversible work to the two-impurity spin correlation function, our findings can be tested experimentally.
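For a sudden quench from H_pre to H_post, the irreversible work is the energy of the pre-quench ground state measured in the post-quench Hamiltonian, minus the post-quench ground energy: W_irr = <psi0|H_post|psi0> - E_gs(H_post). The toy example below computes this for a two-level system, not the two-impurity Kondo model, purely to make the definition concrete:

```python
# Toy illustration of irreversible work in a sudden quench, for a two-level
# Hamiltonian H = [[h, d], [d, -h]] (eigenvalues +/- sqrt(h^2 + d^2)).
# This is NOT a DMRG computation on the two-impurity Kondo model.
import math

def ground_state(h, d):
    """Ground state (real 2-vector) and energy -sqrt(h^2+d^2) of H."""
    E = math.hypot(h, d)
    v = (d, -(h + E))        # eigenvector for eigenvalue -E (d != 0 assumed)
    n = math.hypot(*v)
    return (v[0] / n, v[1] / n), -E

def irreversible_work(h0, d0, h1, d1):
    (a, b), _ = ground_state(h0, d0)   # pre-quench ground state |psi0>
    _, e1 = ground_state(h1, d1)       # post-quench ground energy
    # <psi0| H_post |psi0> for H_post = [[h1, d1], [d1, -h1]]
    mean_e = h1 * (a * a - b * b) + 2 * d1 * a * b
    return mean_e - e1                 # always >= 0 (variational principle)

print(irreversible_work(1.0, 0.2, -1.0, 0.2))  # field flip: W_irr > 0
```

The variational principle guarantees W_irr >= 0, with equality only when the quench leaves the ground state unchanged; the paper's result is that near the impurity critical point this quantity obeys finite-size scaling.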

Relevance: 10.00%

Abstract:

Smartphones have undergone a remarkable evolution over the last few years, from simple calling devices to full-fledged computing devices on which multiple services and applications run concurrently. Unfortunately, battery capacity increases at a much slower pace, making it a main bottleneck for Internet-connected smartphones. Several software-based techniques have been proposed in the literature for improving battery life. The most common techniques include data compression, packet aggregation or batch scheduling, offloading partial computations to the cloud, and periodically switching off interfaces (e.g., WiFi or 3G/4G) for short intervals. However, there has been no focus on eliminating the energy wasted by background applications that extensively utilize smartphone resources such as the CPU, memory, GPS, WiFi, and 3G/4G data connection. In this paper, we propose an Application State Proxy (ASP) that suppresses/stops applications on smartphones and maintains their presence on some other network device. The applications are resumed/restarted on the smartphone only in case of an event, such as the arrival of a new message. We present the key requirements for the ASP service and different possible architectural designs. In short, the ASP concept can significantly improve the battery life of smartphones by reducing, to the maximum extent, the resource usage of background applications.
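The suppress/resume life cycle described above can be sketched as a small state machine. The class and method names below are hypothetical, not an interface from the paper; the sketch only shows the control flow of a proxy that keeps suspended apps "present" and wakes them on relevant events:

```python
# Conceptual sketch of the ASP idea (names and interface are hypothetical):
# the proxy suspends an app on the phone, maintains its network presence,
# and resumes the app only when an event for it actually arrives.

class AppStateProxy:
    def __init__(self):
        self.suspended = set()   # apps whose presence the proxy maintains

    def suppress(self, app):
        """Stop the app on the phone; it no longer consumes CPU or network."""
        self.suspended.add(app)

    def on_event(self, app, event):
        """Resume the app if it was suspended; otherwise just deliver."""
        if app in self.suspended:
            self.suspended.discard(app)
            return f"resumed {app} for {event}"
        return f"{app} already running; delivered {event}"

proxy = AppStateProxy()
proxy.suppress("messenger")
print(proxy.on_event("messenger", "new-message"))  # wakes the app
```

The battery saving comes from the time spent in the suspended set: while there, the app performs no wakeups, polling, or keep-alive traffic on the phone.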

Relevance: 10.00%

Abstract:

Ranking variables according to their relevance for predicting an outcome is an important task in biomedicine. For instance, such a ranking can be used to select a smaller number of genes on which to run more sophisticated experiments. Quor is a nonparametric method designed to provide a confidence value for the ordering of arbitrary quantiles of different populations, using independent samples. This confidence may provide insights into possible differences among groups and yields a ranking of importance for the variables. Computations are efficient and use exact distributions, with no need for asymptotic considerations. Experiments with simulated data and with multiple real -omics data sets show the advantages and disadvantages of the method. Quor makes no assumptions beyond the independence of samples, so it may be a better option when the assumptions of other methods cannot be asserted. The software is publicly available on CRAN.
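The kind of exact, distribution-free statement such a method builds on is the classical order-statistic result: for a sample of size n, the probability that the q-quantile lies between the i-th and j-th order statistics is a binomial sum, with no asymptotics involved. This is an illustration of the exact-distribution flavor mentioned in the abstract, not the Quor algorithm itself:

```python
# Exact, distribution-free coverage for a quantile from order statistics:
#   P( X_(i) <= xi_q <= X_(j) ) = sum_{k=i}^{j-1} C(n, k) q^k (1-q)^(n-k).
# (Illustrative of the exact-binomial machinery; not the Quor method.)
from math import comb

def quantile_coverage(n, q, i, j):
    """Coverage probability for the q-quantile between order stats i and j."""
    return sum(comb(n, k) * q**k * (1 - q)**(n - k) for k in range(i, j))

# With n = 10, the chance the median lies between X_(2) and X_(9):
print(round(quantile_coverage(10, 0.5, 2, 9), 4))  # 1002/1024 ~ 0.9785
```

Because the formula is exact for any continuous underlying distribution, statements of this kind need only the independence of the samples, mirroring Quor's single assumption.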

Relevance: 10.00%

Abstract:

The present work numerically examines the asymmetric behavior of a hydrogen/air flame in a micro-channel subjected to a non-uniform wall temperature distribution. A high-resolution (cell size of 25 μm × 25 μm) two-dimensional transient Navier–Stokes simulation is conducted in the low-Mach-number formulation, using detailed chemistry involving 9 chemical species and 21 elementary reactions. First, the effects of hydrodynamic and diffusive-thermal instabilities are studied by performing the computations for different Lewis numbers. Then, the effects of preferential diffusion of heat and mass on the asymmetric behavior of the hydrogen flame are analyzed for different inlet velocities and equivalence ratios. The results show that for flames in micro-channels, interactions between thermal diffusion and molecular diffusion play the major role in the evolution of a symmetric flame into an asymmetric one, while the role of the Darrieus–Landau instability is found to be minor. It is also found that in symmetric flames the Lewis number decreases behind the flame front. This is related to the curvature of the flame, which inclines the thermal and mass fluxes: the mass diffusion vectors point toward the walls, and the thermal diffusion vectors point toward the centerline. An asymmetric flame is observed when the length of the flame front is about 1.1–1.15 times the channel width.

Relevance: 10.00%

Abstract:

The main goal of this thesis is to discuss the determination of homological invariants of polynomial ideals. We consider different coordinate systems and analyze their significance for the computation of certain invariants. In particular, we provide an algorithm that transforms any ideal into strongly stable position if char k = 0. With a slight modification, this algorithm can also be used to achieve a stable or quasi-stable position. If the field has positive characteristic, the Borel-fixed position is the most we can obtain with our method. Further, we present some applications of Pommaret bases, focusing on how to read off invariants directly from such a basis. In the second half of this dissertation we take a closer look at another homological invariant, the (absolute) reduction number. It is a known fact that the reduction number can be read off immediately from a basis of the generic initial ideal. However, we show that it is not possible to formulate an algorithm, based on analyzing only the leading ideal, that transforms an ideal into a position from which this invariant can be read off directly from the leading ideal. So, in general, we cannot read off the reduction number from a Pommaret basis. This result motivates a deeper investigation of which properties a coordinate system must possess so that the reduction number can be determined easily, i.e. by analyzing the leading ideal. This approach leads to the introduction of generalized versions of the stable positions mentioned above, such as the weakly D-stable and weakly D-minimal stable positions. The latter represents a coordinate system that allows the reduction number to be determined without any further computations. Finally, we introduce the notion of β-maximal position, which provides many interesting algebraic properties. In particular, this position, in combination with weak D-stability, is sufficient for the weakly D-minimal stable position and thus possesses a connection to the reduction number.