922 results for binary logic
Abstract:
Conditional mutagenesis using Cre recombinase expressed from tissue specific promoters facilitates analyses of gene function and cell lineage tracing. Here, we describe two novel dual-promoter-driven conditional mutagenesis systems designed for greater accuracy and optimal efficiency of recombination. Co-Driver employs a recombinase cascade of Dre and Dre-responsive Cre, which processes loxP-flanked alleles only when both recombinases are expressed in a predetermined temporal sequence. This unique property makes Co-Driver ideal for sequential lineage tracing studies aimed at unraveling the relationships between cellular precursors and mature cell types. Co-InCre was designed for highly efficient intersectional conditional transgenesis. It relies on highly active trans-splicing inteins and promoters with simultaneous transcriptional activity to reconstitute Cre recombinase from two inactive precursor fragments. By generating native Cre, Co-InCre attains recombination rates that exceed all other binary SSR systems evaluated in this study. Both Co-Driver and Co-InCre significantly extend the utility of existing Cre-responsive alleles.
Abstract:
The logic PJ is a probabilistic logic defined by adding (non-iterated) probability operators to the basic justification logic J. In this paper we establish upper and lower bounds for the complexity of the derivability problem in the logic PJ. The main result of the paper is that the complexity of the derivability problem in PJ remains the same as the complexity of the derivability problem in the underlying logic J, which is Π₂ᵖ-complete. This implies that the probability operators do not increase the complexity of the logic, although they arguably enrich the expressiveness of the language.
Abstract:
We present a probabilistic justification logic, PPJ, to study rational belief, degrees of belief and justifications. We establish soundness and completeness for PPJ and show that its satisfiability problem is decidable. In the last part we use PPJ to provide a solution to the lottery paradox.
Abstract:
A Monte Carlo simulation study was conducted to investigate parameter estimation and hypothesis testing in several well-known adaptive randomization procedures. The four urn models studied are the Randomized Play-the-Winner (RPW), Randomized Pólya Urn (RPU), Birth and Death Urn with Immigration (BDUI), and Drop-the-Loser (DL) urn. Two sequential estimation methods, sequential maximum likelihood estimation (SMLE) and the doubly adaptive biased coin design (DABC), are simulated at three optimal allocation targets that minimize the expected number of failures under constant variance of the simple difference (RSIHR), the relative risk (ORR), and the odds ratio (OOR), respectively. The log likelihood ratio test and three Wald-type tests (simple difference, log of relative risk, log of odds ratio) are compared across the adaptive procedures. Simulation results indicate that although RPW is slightly better at assigning more patients to the superior treatment, the DL method is considerably less variable and its test statistics have better normality. Compared with SMLE, DABC has a slightly higher overall response rate with lower variance, but larger bias and variance in parameter estimation. Additionally, the test statistics under SMLE have better normality and a lower type I error rate, and the power of hypothesis testing is more comparable with that of equal randomization. RSIHR usually has the highest power among the three optimal allocation ratios; however, the ORR allocation has better power and a lower type I error rate when the log of the relative risk is the test statistic, and the expected number of failures under ORR is smaller than under RSIHR. The simple difference of response rates has the worst normality among the four test statistics, and the power of the hypothesis test is always inflated when the simple difference is used. In contrast, the normality of the log likelihood ratio test statistic is robust against changes in the adaptive randomization procedure.
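As a concrete illustration of how an urn-based adaptive design allocates patients, the following sketch simulates a Randomized Play-the-Winner rule for two treatments. The success probabilities, initial urn composition, and trial size are hypothetical and are not taken from the study above.

```python
import random

def simulate_rpw(p_a, p_b, n_patients, initial=1, reward=1, seed=0):
    """Minimal Randomized Play-the-Winner sketch (hypothetical parameters).

    The urn starts with `initial` balls per treatment. Each patient draws a
    ball (with replacement) to choose a treatment; a success adds `reward`
    balls of the same treatment, a failure adds them to the other arm.
    """
    rng = random.Random(seed)
    urn = {"A": initial, "B": initial}
    assigned = {"A": 0, "B": 0}
    failures = 0
    for _ in range(n_patients):
        # Draw a treatment proportionally to the current urn composition.
        total = urn["A"] + urn["B"]
        arm = "A" if rng.random() < urn["A"] / total else "B"
        assigned[arm] += 1
        success = rng.random() < (p_a if arm == "A" else p_b)
        if success:
            urn[arm] += reward
        else:
            failures += 1
            urn["A" if arm == "B" else "B"] += reward
    return assigned, failures

# Example run with hypothetical response rates 0.7 (A) vs 0.4 (B).
print(simulate_rpw(0.7, 0.4, n_patients=100))
```

Repeating such runs many times and recording the allocation proportions and estimated response rates is the basic mechanism behind the comparisons reported in the abstract.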
Abstract:
Logistic regression is one of the most important tools in the analysis of epidemiological and clinical data. Such data often contain missing values for one or more variables. Common practice is to eliminate all individuals for whom any information is missing. This deletion approach does not make efficient use of the available information and often introduces bias. Two methods were developed to estimate logistic regression coefficients for mixed dichotomous and continuous covariates, including partially observed binary covariates, with the data assumed missing at random (MAR). The first method (PD) uses the predictive distribution as a weight to average the logistic regressions fitted over all possible values of the missing observations; the second method (RS) uses a variant of a resampling technique. Seven additional methods were compared with these two approaches in a simulation study: (1) analysis based on only the complete cases, (2) substituting the mean of the observed values for the missing value, (3) an imputation technique based on the proportions of the observed data, (4) regressing the partially observed covariates on the remaining continuous covariates, (5) regressing the partially observed covariates on the remaining continuous covariates conditional on the response variable, (6) regressing the partially observed covariates on the remaining continuous covariates and the response variable, and (7) the EM algorithm. Both proposed methods showed smaller standard errors (s.e.) for the coefficient involving the partially observed covariate, as well as for the other coefficients. However, both methods, especially PD, are computationally demanding; for the analysis of large data sets with partially observed covariates, further refinement of these approaches is needed.
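To make the PD idea concrete, the sketch below shows a much-simplified variant: each record with a missing binary covariate is expanded into two weighted rows (covariate = 0 and covariate = 1), with weights taken from an estimated predictive probability, and a weighted logistic regression is then fitted. The predictive model, the function names, and the data layout are hypothetical; this is not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_with_weighted_augmentation(X_cont, x_bin, y):
    """Fit a logistic regression with a partially observed binary covariate.

    X_cont : (n, k) fully observed continuous covariates
    x_bin  : (n,) binary covariate with np.nan marking missing entries
    y      : (n,) binary outcome
    """
    observed = ~np.isnan(x_bin)

    # Crude predictive model for the missing covariate, P(x_bin = 1 | X_cont),
    # estimated from the complete cases (a stand-in for the predictive
    # distribution used by the PD method).
    pred = LogisticRegression().fit(X_cont[observed], x_bin[observed].astype(int))
    p1 = pred.predict_proba(X_cont)[:, 1]

    rows, outcomes, weights = [], [], []
    for i in range(len(y)):
        if observed[i]:
            rows.append(np.r_[X_cont[i], x_bin[i]])
            outcomes.append(y[i])
            weights.append(1.0)
        else:
            # Two weighted pseudo-rows: x_bin = 0 and x_bin = 1.
            for value, w in ((0.0, 1.0 - p1[i]), (1.0, p1[i])):
                rows.append(np.r_[X_cont[i], value])
                outcomes.append(y[i])
                weights.append(w)

    model = LogisticRegression()
    model.fit(np.array(rows), np.array(outcomes), sample_weight=np.array(weights))
    return model
```

Averaging over all joint configurations of several missing covariates, as the full PD method does, grows combinatorially, which is the computational burden the abstract points out.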
Abstract:
The Laredo Epidemiology Project is a study of the patterns of degenerative disease, particularly cancer, in the families of Laredo, Texas. The genealogical history of Laredo was reconstructed by grouping 350,000 individual church and civil vital event records into multi-generational families, with record linkage based on matching names. Mortality data from death records are mapped onto these pedigrees for analysis. This dissertation describes the construction of the database and the logic upon which decisions were based.
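As a toy illustration of the name-based record linkage described above, the sketch below groups vital-event records that share a normalized surname and given name. The record schema, normalization rules, and example data are hypothetical and far simpler than the matching logic actually used in the project.

```python
import unicodedata
from collections import defaultdict

def normalize(name):
    """Normalize a name for matching: strip accents, case, and punctuation."""
    stripped = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
    return " ".join(stripped.lower().replace(".", " ").split())

def link_by_name(records):
    """Group records whose (surname, given name) keys match after normalization.

    records: iterable of dicts with 'surname' and 'given' fields (hypothetical
    schema). Returns a dict mapping the normalized key to the matched records.
    """
    groups = defaultdict(list)
    for rec in records:
        key = (normalize(rec["surname"]), normalize(rec["given"]))
        groups[key].append(rec)
    return groups

# Example: a baptism and a death record for the same person link on one key.
records = [
    {"surname": "García", "given": "José", "event": "baptism", "year": 1890},
    {"surname": "Garcia", "given": "Jose.", "event": "death", "year": 1952},
]
for key, recs in link_by_name(records).items():
    print(key, [r["event"] for r in recs])
```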
Abstract:
In this paper, a numerical study is made of simple bi-periodic binary diffraction gratings for solar cell applications. The gratings consist of hexagonal arrays of elliptical towers and wells etched directly into the solar cell substrate. The gratings are applied to two distinct solar cell technologies: a quantum dot intermediate band solar cell (QD-IBSC) and a crystalline silicon solar cell (SSC). In each case, the expected photocurrent increase due to the presence of the grating is calculated assuming AM1.5D illumination. For each technology, the grating period, well/tower depth and well/tower radii are optimised to maximise the photocurrent. The optimum parameters are presented. Results are presented for QD-IBSCs with a range of quantum dot layers and for SSCs with a range of thicknesses. For the QD-IBSC, it is found that the optimised grating leads to an absorption enhancement above that calculated for an ideally Lambertian scatterer for cells with less than 70 quantum dot layers. In a QD-IBSC with 50 quantum dot layers equipped with the optimum grating, the weak intermediate band to conduction band transition absorbs roughly half the photons in the corresponding sub-range of the AM1.5D spectrum. For the SSC, it is found that the optimised grating leads to an absorption enhancement above that calculated for an ideally Lambertian scatterer for cells with thicknesses of 10 µm or greater. A 20 µm thick SSC equipped with the optimised grating leads to an absorption enhancement above that of a 200 µm thick SSC equipped with a planar back reflector.
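For reference, the photocurrent figure of merit used in this kind of study is typically the absorptance-weighted integral of the incident photon flux. The sketch below evaluates that integral numerically for a hypothetical absorptance curve and a placeholder spectrum; the wavelength grid and the flat spectrum are illustrative stand-ins, not the tabulated AM1.5D data used in the paper.

```python
import numpy as np

# Physical constants (SI units).
Q = 1.602176634e-19        # elementary charge [C]
H = 6.62607015e-34         # Planck constant [J s]
C = 2.99792458e8           # speed of light [m/s]

def photocurrent(wavelength_nm, spectral_irradiance, absorptance):
    """Photocurrent density J_ph = q * integral( A(lambda) * Phi(lambda) dlambda ).

    wavelength_nm        : wavelengths [nm]
    spectral_irradiance  : spectrum [W m^-2 nm^-1] (e.g. tabulated AM1.5D)
    absorptance          : fraction of photons absorbed at each wavelength
    Returns J_ph in A m^-2, assuming every absorbed photon yields one carrier.
    """
    wl_m = wavelength_nm * 1e-9
    photon_flux = spectral_irradiance / (H * C / wl_m)   # photons m^-2 s^-1 nm^-1
    return Q * np.trapz(absorptance * photon_flux, wavelength_nm)

# Hypothetical example: flat 80% absorptance over 300-1100 nm and a crude
# constant spectrum; real calculations would use the tabulated AM1.5D data.
wl = np.linspace(300, 1100, 801)
spectrum = np.full_like(wl, 0.9)          # ~0.9 W m^-2 nm^-1 (placeholder)
absorptance = np.full_like(wl, 0.8)
print(f"J_ph = {photocurrent(wl, spectrum, absorptance):.1f} A/m^2")
```

Optimising the grating period and the well/tower geometry then amounts to maximising this integral with the absorptance computed for each candidate geometry.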
Abstract:
After more than a decade of development work and hopes, the usage of mobile Internet has finally taken off. Now, we are witnessing the first signs of what might become the explosion of mobile content and applications that will shape the (mobile) Internet of the future. As with the wired Internet, search will become very relevant for the usage of the mobile Internet. Current research on mobile search has applied a limited set of methodologies and has generated a narrow range of meaningful results. This article covers new ground, exploring the use and visions of mobile search through a qualitative study based on user interviews. Its main conclusion builds upon the hypothesis that mobile search follows a mobile logic different from today's. First, (advanced) users want to access the entire Internet from their mobile devices, rather than subsections of it. Second, success is based on new added-value applications that exploit unique mobile functionalities. The authors interpret this mobile logic as fundamentally involving the use of personalised and context-based services.
Abstract:
In this paper, a new countermeasure against power and electromagnetic (EM) Side Channel Attacks (SCA) on FPGA-implemented cryptographic algorithms is proposed. The structure targets a critical vulnerability, Early Evaluation, also known as the Early Propagation Effect (EPE), which exists in most conventional SCA-hardened DPL (Dual-rail with Precharge Logic) solutions. The main merit of this proposal is that EPE can be effectively prevented by using a synchronized, non-regular precharge network that maintains identical routing between the original and mirror parts, reducing cost and design complexity compared with previous EPE-resistant countermeasures without sacrificing the security level. Another advantage of our Precharge Absorbed (PA) DPL method is that its dual-core style (independent architectures for the true and false parts) can be generated using partial reconfiguration. This enables dynamic security protection with better energy planning: at a low security level the system keeps only the true part, which performs the normal en/decryption task, and it reconfigures the false part once a high security level is required. A relatively limited clock speed is the compromise, since signal propagation is restricted to a portion of the clock period. In this paper, we explain the principles of PA-DPL and provide guidelines for designing this structure. We experimentally validate our method on a minimized AES co-processor on a Xilinx Virtex-5 board using electromagnetic (EM) attacks.
Abstract:
Publication on the Sede del Consejo Consultor de Castilla y León in Zamora in the architecture journal IA&B (Mumbai). The Zamora project centres on the dialogue between a crystalline glass piece and the thick perimeter stone wall. These two façades are brought into relation by a perimeter courtyard that puts the contact between the glass and the stone under tension. Special mention is made of the strict precision and rationality of the project. The publication contains texts, planimetric drawings, photographs, and material from the design research for the project (sketches and photographs of models).
Abstract:
Thin polymer films are increasingly used in advanced technological applications. The use of these films as coatings is often limited by their lack of stability due to their wettability properties on the substrates.
Abstract:
We propose an analysis for detecting procedures and goals that are deterministic (i.e., that produce at most one solution at most once), or predicates whose clause tests are mutually exclusive (which implies that at most one of their clauses will succeed) even if they are not deterministic. The analysis takes advantage of the pruning operator in order to improve the detection of mutual exclusion and determinacy. It also supports arithmetic equations and disequations, as well as equations and disequations on terms, for which we give a complete satisfiability testing algorithm, w.r.t. available type information. Information about determinacy can be used for program debugging and optimization, resource consumption and granularity control, abstraction-carrying code, etc. We have implemented the analysis and integrated it into the CiaoPP system, which also automatically infers the mode and type information that our analysis takes as input. Experiments performed on this implementation show that the analysis is fairly accurate and efficient.
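As a rough illustration of the kind of mutual-exclusion check performed on arithmetic clause tests, the sketch below asks an off-the-shelf SMT solver whether two guards can hold simultaneously. It uses the z3 Python bindings purely for illustration and is unrelated to the satisfiability algorithm actually implemented in CiaoPP; the example guards are hypothetical.

```python
# Requires the z3-solver package: pip install z3-solver
from z3 import Int, And, Solver, unsat

def mutually_exclusive(guard_a, guard_b):
    """Return True if the two guards can never hold at the same time."""
    s = Solver()
    s.add(And(guard_a, guard_b))
    return s.check() == unsat

# Hypothetical clause tests for a predicate p(X): clause 1 guards on X > 0,
# clause 2 on X <= 0, clause 3 on X > 10.
X = Int("X")
print(mutually_exclusive(X > 0, X <= 0))    # True  -> clauses 1 and 2 exclusive
print(mutually_exclusive(X > 0, X > 10))    # False -> clauses 1 and 3 overlap
```

If every pair of clause tests is unsatisfiable in conjunction, at most one clause can succeed for any call, which is the property the analysis exploits.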
Abstract:
Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures, which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task partitioning and placement. Starting in the mid 80s there has been significant progress in the development of parallelizing compilers for logic programming (and, more recently, constraint programming), resulting in quite capable parallelizers. The typical applications of these paradigms frequently involve irregular computations, and make heavy use of dynamic data structures with pointers, since logical variables represent in practice a well-behaved form of pointers. This arguably makes the techniques used in these compilers potentially interesting. In this paper, we introduce in a tutorial way some of the problems faced by parallelizing compilers for logic and constraint programs and provide pointers to some of the significant progress made in the area. In particular, this work has resulted in a series of achievements in the areas of inter-procedural pointer aliasing analysis for independence detection, cost models and cost analysis, cactus-stack memory management, and techniques for managing speculative and irregular computations through task granularity control and dynamic task allocation (such as work-stealing schedulers), etc.
Abstract:
Several types of parallelism can be exploited in logic programs while preserving correctness and efficiency, i.e., ensuring that the parallel execution obtains the same results as the sequential one and that the amount of work performed is not greater. However, such results do not take into account a number of overheads which appear in practice, such as process creation and scheduling, which can induce a slow-down or, at least, limit speedup if they are not controlled in some way. This paper describes a methodology whereby the granularity of parallel tasks, i.e., the work available under them, is efficiently estimated and used to limit parallelism so that the effect of such overheads is controlled. The run-time overhead associated with the approach is usually quite small, since as much work as possible is done at compile time. A number of run-time optimizations are also proposed. Moreover, a static analysis of the overhead associated with the granularity control process is performed in order to decide whether it is worthwhile. The performance improvements resulting from the incorporation of grain size control are shown to be quite good, especially for systems with medium to large parallel execution overheads.
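As a language-neutral illustration of the run-time side of granularity control (the compile-time cost analysis is not shown), the sketch below spawns a parallel task only when a cheap cost estimate exceeds a threshold; the cost model, the threshold value, and the example workload are all hypothetical, not taken from the paper.

```python
from concurrent.futures import ProcessPoolExecutor

# Hypothetical threshold: below this estimated cost, spawning a process
# costs more than it saves, so the task is run sequentially instead.
GRAIN_THRESHOLD = 25

def estimated_cost(n):
    """Crude stand-in for a compile-time cost estimate of fib(n)."""
    return n

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def par_fib(n, executor):
    """Parallel fib with granularity control on the recursive calls."""
    if n < 2:
        return n
    if estimated_cost(n - 1) >= GRAIN_THRESHOLD:
        # Large enough task: run one branch in a worker process.
        future = executor.submit(fib, n - 1)
        right = par_fib(n - 2, executor)
        return future.result() + right
    # Small task: parallelisation overhead would dominate, so stay sequential.
    return fib(n)

if __name__ == "__main__":
    with ProcessPoolExecutor() as ex:
        print(par_fib(30, ex))
```

The key design point, mirrored in the methodology above, is that the cost estimate is computed from information prepared ahead of time, so the run-time decision itself adds only a cheap comparison per candidate task.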