880 results for Iterative decoding
Abstract:
Two logically distinct and permissive extensions of iterative weak dominance are introduced for games with possibly vector-valued payoffs. The first, iterative partial dominance, builds on an easy-to-check condition but may lead to solutions that do not include any (generalized) Nash equilibria. However, the second and intuitively more demanding extension, iterative essential dominance, is shown to be an equilibrium refinement. The latter result includes Moulin’s (1979) classic theorem as a special case when all players’ payoffs are real-valued. Therefore, essential dominance solvability can be a useful solution concept for making sharper predictions in multicriteria games that feature a plethora of equilibria.
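As background to the dominance notions above, the following is a minimal sketch, assuming a standard two-player game with scalar payoffs (the Moulin setting), of iterated elimination of weakly dominated strategies; the payoff matrices and the simultaneous-removal variant are illustrative choices, not the paper's vector-valued partial or essential dominance procedures.

```python
import numpy as np

def weakly_dominated(payoff, axis):
    """Indices of rows (axis=0) or columns (axis=1) weakly dominated by another pure strategy."""
    M = payoff if axis == 0 else payoff.T
    dominated = set()
    for i in range(M.shape[0]):
        for j in range(M.shape[0]):
            if i != j and np.all(M[j] >= M[i]) and np.any(M[j] > M[i]):
                dominated.add(i)
                break
    return dominated

def iterated_weak_dominance(A, B):
    """Iteratively remove weakly dominated rows (player 1, payoffs A) and columns (player 2, payoffs B)."""
    rows, cols = list(range(A.shape[0])), list(range(A.shape[1]))
    changed = True
    while changed:
        changed = False
        dr = weakly_dominated(A[np.ix_(rows, cols)], axis=0)
        if dr:
            rows = [r for k, r in enumerate(rows) if k not in dr]
            changed = True
        dc = weakly_dominated(B[np.ix_(rows, cols)], axis=1)
        if dc:
            cols = [c for k, c in enumerate(cols) if k not in dc]
            changed = True
    return rows, cols

# toy 2x2 example: the row player's second strategy is weakly dominated and gets eliminated
A = np.array([[1, 1], [1, 0]])   # row player's payoffs
B = np.array([[1, 1], [1, 0]])   # column player's payoffs
print(iterated_weak_dominance(A, B))
```

For weak dominance the surviving set can depend on the elimination order; this sketch removes all of a player's dominated strategies in each pass, which is only one of the standard variants.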
Abstract:
Research project carried out by a secondary school student and awarded a CIRIT Prize for fostering the scientific spirit of young people in 2009. "La programació al servei de la matemàtica" (programming in the service of mathematics) is a computer program built with Excel and Visual Basic. It solves first-degree equations, second-degree equations, linear systems of two equations in two unknowns, consistent determined linear systems of three equations in three unknowns, and finds zeros of functions using Bolzano's theorem. In each case it plots the solutions graphically. To achieve this, the mathematical work involved equations, complex numbers, Cramer's rule for solving systems, and devising a way to program an iterative method based on Bolzano's theorem. On the graphical side, the project worked out how to build value tables with two and three variables and how to work with lines and planes. On the programming side, the student used a language new to them and, above all, had to learn where to place each instruction, since moving one by a single line can change everything. In addition, other programming problems were solved and the screen layouts were designed.
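The original program was written in Excel and Visual Basic; as an illustration of the iterative root-finding idea behind Bolzano's theorem, here is a minimal bisection sketch in Python (bisection is my assumption for the iterative method; the report does not name it).

```python
def bisect(f, a, b, tol=1e-10, max_iter=200):
    """Find a zero of f in [a, b], assuming f(a) and f(b) have opposite signs (Bolzano's theorem)."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2.0
        fm = f(m)
        if fm == 0 or (b - a) / 2.0 < tol:
            return m
        if fa * fm < 0:          # the sign change lies in [a, m]
            b, fb = m, fm
        else:                    # the sign change lies in [m, b]
            a, fa = m, fm
    return (a + b) / 2.0

# example: the zero of x^3 - x - 2 on [1, 2]
print(bisect(lambda x: x**3 - x - 2, 1.0, 2.0))
```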
Abstract:
Humans can recognize categories of environmental sounds, including vocalizations produced by humans and animals and the sounds of man-made objects. Most neuroimaging investigations of environmental sound discrimination have studied subjects while they consciously perceive, and often explicitly recognize, the stimuli. Consequently, it remains unclear to what extent auditory object processing occurs independently of task demands and consciousness. Studies in animal models have shown that environmental sound discrimination at a neural level persists even in anesthetized preparations, whereas data from anesthetized humans have thus far provided null results. Here, we studied comatose patients as a model of environmental sound discrimination capacities during unconsciousness. We included 19 comatose patients treated with therapeutic hypothermia (TH) during the first 2 days of coma, while recording 19-channel electroencephalography (EEG). At the level of each individual patient, we applied a decoding algorithm to quantify the differential EEG responses to human vs. animal vocalizations as well as to sounds from living sources vs. man-made objects. Discrimination between vocalization types was accurate in 11 patients, and discrimination between sounds from living and man-made sources in 10 patients. At the group level, the results were significant only for the comparison between vocalization types. These results lay the groundwork for disentangling truly preferential activations in response to auditory categories from the contribution of awareness to auditory category discrimination.
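The abstract does not specify the decoding algorithm; the following is a hedged sketch of a generic cross-validated single-trial EEG classifier (scikit-learn linear discriminant analysis on flattened 19-channel epochs, with synthetic data), meant only to illustrate what decoding differential EEG responses typically involves.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 120, 19, 50       # synthetic 19-channel EEG epochs
X = rng.normal(size=(n_trials, n_channels, n_times))
y = rng.integers(0, 2, size=n_trials)             # 0 = human vocalization, 1 = animal vocalization
X[y == 1, :, 20:30] += 0.3                        # inject a small class difference in a time window

# flatten channels x time into one feature vector per trial and cross-validate the classifier
scores = cross_val_score(LinearDiscriminantAnalysis(),
                         X.reshape(n_trials, -1), y,
                         cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
print(f"decoding accuracy: {scores.mean():.2f}")
```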
Abstract:
Metropolitan areas concentrate the main share of population, production and consumption in OECD countries. They are likely to be the most important units for economic, social and environmental analysis as well as for the development of policy strategies. However, one of the main problems in adopting metropolitan areas as units of analysis and policy in European countries is the absence of widely accepted standards for identifying them. This problem became apparent when we tried to perform comparative research between Spain and Italy using metropolitan areas as units of analysis. The aim of this paper is to identify metropolitan areas in Spain and Italy using similar methodologies. The results allow the metropolitan realities of both countries to be compared and provide metropolitan units that can be used in subsequent comparative research. Two methodologies are proposed: the Cheshire-GEMACA methodology (FUR) and an iterative version of the USA-MSA algorithm, specifically adapted to deal with polycentric metropolitan areas (DMA). Both methods approximate the metropolitan reality well and produce very similar results: 75 FUR and 67 DMA in Spain (75% of total population and employment), and 81 FUR and 86 DMA in Italy (70% of total population and employment).
Abstract:
We consider linear optimization over a nonempty convex semi-algebraic feasible region F. Semidefinite programming is an example. If F is compact, then for almost every linear objective there is a unique optimal solution, lying on a unique "active" manifold, around which F is "partly smooth", and the second-order sufficient conditions hold. Perturbing the objective results in smooth variation of the optimal solution. The active manifold consists, locally, of these perturbed optimal solutions; it is independent of the representation of F, and is eventually identified by a variety of iterative algorithms such as proximal and projected gradient schemes. These results extend to unbounded sets F.
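A minimal sketch (my own construction, not taken from the paper) of how a projected-gradient scheme identifies the optimal face of a simple compact convex set, here the box [0,1]^n with a linear objective:

```python
import numpy as np

def projected_gradient_box(c, n_iter=200, step=0.5):
    """Minimize c @ x over the box [0, 1]^n by projected gradient descent."""
    x = np.full_like(c, 0.5)                  # start at the box centre
    for _ in range(n_iter):
        x = np.clip(x - step * c, 0.0, 1.0)   # gradient step followed by projection onto the box
    return x

c = np.array([1.0, -2.0, 0.5, -0.1])
x = projected_gradient_box(c)
print(x)   # the optimal face: x_i = 0 where c_i > 0 and x_i = 1 where c_i < 0
```

After finitely many iterations the iterate lands exactly on the optimal face, which plays the role of the active manifold in this toy case.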
Abstract:
Objectives: Imatinib has been increasingly proposed for therapeutic drug monitoring (TDM), as trough concentrations (Cmin) correlate with response rates in CML patients. This analysis aimed to evaluate the impact of imatinib exposure on optimal molecular response rates in a large European cohort of patients followed by centralized TDM.
Methods: Sequential PK/PD analysis was performed in NONMEM 7 on 2230 plasma (PK) samples obtained along with molecular response (PD) data from 1299 CML patients. Model-based individual Bayesian estimates of exposure, parameterized as initial dose-adjusted and log-normalized Cmin (log-Cmin) or clearance (CL), were investigated as potential predictors of optimal molecular response, while accounting for time under treatment (stratified at 3 years), gender, CML phase, age, potentially interacting comedication, and TDM frequency. The PK/PD analysis used mixed-effect logistic regression (iterative two-stage method) to account for intra-patient correlation.
Results: In univariate analyses, CL, log-Cmin, time under treatment, TDM frequency, gender (all p<0.01) and CML phase (p=0.02) were significant predictors of the outcome. In multivariate analyses, all but log-Cmin remained significant (p<0.05). Our model estimates a 54.1% probability of optimal molecular response in a female patient with a median CL of 14.4 L/h, increasing by 4.7% with a 35% decrease in CL (10th percentile of the CL distribution) and decreasing by 6% with a 45% increase in CL (90th percentile). Male patients were less likely than female patients to be in optimal response (odds ratio: 0.62, p<0.001), with an estimated probability of 42.3%.
Conclusions: Beyond CML phase and time on treatment, which are expectedly correlated with the outcome, an effect of initial imatinib exposure on the probability of achieving optimal molecular response was confirmed under field conditions by this multivariate analysis. Interestingly, male patients had a higher risk of suboptimal response, which might not derive exclusively from their 18.5% higher CL, but also from reported lower adherence to treatment. A prospective longitudinal study would be desirable to confirm the clinical importance of the identified covariates and to exclude biases possibly affecting this observational survey.
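As a quick consistency check using only figures stated above, the male response probability can be recovered from the female probability and the reported odds ratio:

```python
# Reported: P(optimal response) = 0.541 for a female patient at the median clearance,
# and an odds ratio of 0.62 for male vs. female patients.
p_female = 0.541
odds_female = p_female / (1 - p_female)
odds_male = 0.62 * odds_female
p_male = odds_male / (1 + odds_male)
print(f"{p_male:.3f}")   # ~0.422, matching the reported 42.3% up to rounding
```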
Abstract:
Whole-body (WB) planar imaging has long been one of the staple methods of dosimetry, and its quantification has been formalized by the MIRD Committee in Pamphlet No. 16. One issue not specifically addressed in the formalism occurs when the count rates reaching the detector are sufficiently high to result in camera count saturation. Camera dead-time effects have been extensively studied, but all of the developed correction methods assume static acquisitions. However, during WB planar (sweep) imaging, a variable amount of imaged activity exists in the detector's field of view as a function of time, and therefore the camera saturation is time dependent. A new time-dependent algorithm was developed to correct for dead-time effects during WB planar acquisitions that accounts for relative motion between the detector heads and the imaged object. Static camera dead-time parameters were acquired by imaging decaying activity in a phantom and obtaining a saturation curve. Using these parameters, an iterative algorithm akin to Newton's method was developed that takes into account the variable count rate seen by the detector as a function of time. The algorithm was tested on simulated data as well as on a whole-body scan of high-activity Samarium-153 in an ellipsoid phantom. A complete set of parameters from unsaturated phantom data necessary for count-rate-to-activity conversion was also obtained, including build-up and attenuation coefficients, in order to convert corrected count-rate values to activity. The algorithm proved successful in accounting for motion- and time-dependent saturation effects in both the simulated and measured data and converged to any desired degree of precision. The clearance half-life from the ellipsoid phantom data was calculated to be 45.1 h after dead-time correction and 51.4 h with no correction; the physical decay half-life of Samarium-153 is 46.3 h. Accurate WB planar dosimetry of high activities relies on successfully compensating for camera saturation in a way that takes into account the variable activity in the field of view, i.e. time-dependent dead-time effects. The algorithm presented here accomplishes this task.
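The abstract does not state the dead-time model; below is a minimal sketch assuming the common paralyzable model m = n·exp(-n·τ), inverted for the true rate n with a Newton iteration at a single time sample (illustrative only, not the published algorithm):

```python
import math

def true_rate(measured, tau, tol=1e-9, max_iter=100):
    """Invert the paralyzable dead-time model m = n*exp(-n*tau) for the true rate n (n*tau < 1 branch)."""
    n = measured                              # the measured rate is a reasonable starting guess
    for _ in range(max_iter):
        f = n * math.exp(-n * tau) - measured
        fprime = math.exp(-n * tau) * (1.0 - n * tau)
        step = f / fprime
        n -= step                             # Newton update
        if abs(step) < tol:
            break
    return n

# example: 1 microsecond dead time, 150 kcps measured
tau = 1e-6
print(true_rate(150e3, tau))   # the true rate comes out higher than the measured one
```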
Abstract:
There are currently many parallel/distributed applications for which SPMD is the most widely used paradigm. Obtaining good performance in a parallel application of this kind is one of the main challenges, given the large number of existing applications. This goal is not easy to achieve, since there is a wide variety of hardware configurations, and the nature of the problems, as well as the ways of implementing them, can also vary. Consequently, if the software/hardware combination is not considered carefully, problems inherent to an iterative application without a control hierarchy defined according to this paradigm may appear. In SPMD, all processes execute the same code but compute a different section of the input data. One solution to a potential performance problem is to propose a load-balancing strategy that evens out the computation among the different processes. In this work we analyze the CG benchmark with heterogeneous loads in order to detect possible performance problems in a real application. One factor that determines performance in this application is the number of nonzero elements contained in the matrix section assigned to each process. We determine that it is possible to define a load-balancing strategy that can be implemented dynamically, and we show experimentally that the performance of the application can be improved significantly with this strategy.
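A minimal static illustration of the idea (not the authors' dynamic strategy): assign matrix rows to SPMD processes so that the nonzero counts, rather than the row counts, are balanced.

```python
import heapq

def balance_rows_by_nnz(row_nnz, n_procs):
    """Assign rows to processes so nonzero counts are balanced (greedy 'largest first' heuristic)."""
    heap = [(0, p) for p in range(n_procs)]        # (current nnz load, process id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(n_procs)]
    for row in sorted(range(len(row_nnz)), key=lambda r: row_nnz[r], reverse=True):
        load, p = heapq.heappop(heap)              # process with the lightest load so far
        assignment[p].append(row)
        heapq.heappush(heap, (load + row_nnz[row], p))
    return assignment

row_nnz = [2, 2, 2, 2, 10, 10, 10, 40]   # toy matrix whose last row is much denser
print(balance_rows_by_nnz(row_nnz, 4))
```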
Abstract:
The widespread use of digital imaging devices for surveillance (CCTV) and entertainment (e.g., mobile phones, compact cameras) has increased the number of images recorded and the opportunities to consider these images as traces or documentation of criminal activity. The forensic science literature focuses almost exclusively on technical issues and evidence assessment [1]. Earlier steps in the investigation phase have been neglected and must be considered. This article is the first comprehensive description of a methodology for event reconstruction using images. This formal methodology was conceptualised from practical experience and applied to different contexts and case studies to test and refine it. Based on this practical analysis, we propose a systematic approach that includes a preliminary analysis followed by four main steps. These steps form a sequence in which the results of each step rely on those of the previous one. The methodology is not linear, however, but a cyclic, iterative progression for obtaining knowledge about an event. The preliminary analysis is a pre-evaluation phase, wherein the potential relevance of images is assessed. In the first step, images are detected and collected as pertinent trace material; the second step involves organising them and assessing their quality and informative potential. The third step includes reconstruction using clues about space, time and actions. Finally, in the fourth step, the images are evaluated and selected as evidence. These steps are described and illustrated using practical examples. The paper outlines how images elicit information about persons, objects, space, time and actions throughout the investigation process to reconstruct an event step by step. We emphasise the hypothetico-deductive reasoning framework, which demonstrates the contribution of images to generating, refining or eliminating propositions or hypotheses. This methodology provides a sound basis for extending the use of images as evidence and, more generally, as clues in investigation and crime reconstruction processes.
Abstract:
In this paper the iterative MSFV method is extended to include the sequential implicit simulation of time-dependent problems involving the solution of a system of pressure-saturation equations. To control numerical errors in the simulation results, an error estimate based on the residual of the MSFV approximate pressure field is introduced. In the initial time steps of the simulation, iterations are employed until a specified accuracy in pressure is achieved. This initial solution is then used to improve the localization assumption at later time steps. Additional iterations in the pressure solution are employed only when the pressure residual becomes larger than a specified threshold value. The efficiency of the strategy and the error-control criteria are numerically investigated. This paper also shows that it is possible to derive an a priori estimate and control, based on the allowed pressure-equation residual, to guarantee the desired accuracy in the saturation calculation.
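A generic, hedged sketch of the residual-based error control described above; the diagonally dominant linear system and the Jacobi correction below are stand-ins for the actual MSFV pressure operator and smoothing step.

```python
import numpy as np

def solve_with_residual_control(A, b, p0, tol=1e-6, max_iter=500):
    """Refine an approximate pressure solution until its relative residual drops below tol."""
    D = np.diag(A)
    p = p0.copy()
    for k in range(max_iter):
        r = b - A @ p
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return p, k              # residual small enough: the approximation is accepted
        p = p + r / D                # one Jacobi correction sweep
    return p, max_iter

# stand-in problem: a diagonally dominant system playing the role of the pressure equation
rng = np.random.default_rng(0)
n = 50
A = rng.normal(size=(n, n)); A = A + A.T + 4 * n * np.eye(n)
b = rng.normal(size=n)
p, iters = solve_with_residual_control(A, b, p0=np.zeros(n))
print(iters, np.linalg.norm(b - A @ p) / np.linalg.norm(b))
```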
Abstract:
Hidden Markov models (HMMs) are probabilistic models that are well adapted to many tasks in bioinformatics, for example, predicting the occurrence of specific motifs in biological sequences. MAMOT is a command-line program for Unix-like operating systems, including MacOS X, that we developed to allow scientists to apply HMMs more easily in their research. One can define the architecture and initial parameters of the model in a text file and then use MAMOT for parameter optimization on example data, for decoding (e.g., predicting motif occurrence in sequences), and for the production of stochastic sequences generated according to the probabilistic model. Two examples for which models are provided are coiled-coil domains in protein sequences and protein binding sites in DNA. Useful features include the use of pseudocounts, state tying and the fixing of selected parameters during learning, and the inclusion of prior probabilities in decoding. AVAILABILITY: MAMOT is implemented in C++ and is distributed under the GNU General Public Licence (GPL). The software, documentation, and example model files can be found at http://bcf.isb-sib.ch/mamot
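MAMOT itself is a C++ command-line tool whose file formats and options are not reproduced here; the decoding step it performs can be illustrated by a minimal log-space Viterbi decoder for a toy two-state HMM (a generic sketch, not MAMOT's API):

```python
import numpy as np

def viterbi(obs, log_start, log_trans, log_emit):
    """Most probable state path for an observation sequence (log-space Viterbi)."""
    n_states = log_start.shape[0]
    T = len(obs)
    score = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    score[0] = log_start + log_emit[:, obs[0]]
    for t in range(1, T):
        for s in range(n_states):
            cand = score[t - 1] + log_trans[:, s]   # best way to reach state s from any previous state
            back[t, s] = np.argmax(cand)
            score[t, s] = cand[back[t, s]] + log_emit[s, obs[t]]
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):                   # backtrack through the stored pointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# toy 2-state model over a binary alphabet: state 1 is a "motif" state that prefers symbol 1
log_start = np.log([0.9, 0.1])
log_trans = np.log([[0.95, 0.05], [0.10, 0.90]])
log_emit  = np.log([[0.8, 0.2], [0.2, 0.8]])
obs = [0, 0, 1, 1, 1, 1, 0, 0]
print(viterbi(obs, log_start, log_trans, log_emit))
```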
Abstract:
In the case of such a very special building project, the crucial stake for sustainable development is the fact that space systems are extreme cases of environmental constraints. Indeed, they constitute an interesting model, as an analogy can be made between the utmost conditions on Mars and some of the possible extreme conditions that Earth might soon face. The didactic objective of the project is to use the context of a building on Mars to teach an approach that raises the students' awareness of designing and planning all steps of a building in a sustainable way, i.e. building, with the available resources, living spaces that satisfy human needs and leave the external environment as intact as possible. The paper presents the approach and the feedback from this student project, more specifically the "ENAC Learning Unit", which involved 17 students from the environmental, civil engineering and architecture sections of EPFL. It also involved professors from all three domains, as well as aerospace and Mars specialists, who gave seminars during the course of the semester. The students were separated into groups, and the project consisted of two phases: 1) analysis of the context and resources, and 2) project design and critique. The organisational, technical and pedagogical aspects of the experience are all presented. The outcome was very positive, with students experiencing for the first time multidisciplinary work and the iterative process of design under multiple constraints.
Abstract:
Digitalization gives the Internet the power to host several virtual representations of reality, including that of identity. We leave an increasingly large digital footprint in cyberspace, and this situation puts our identity at high risk. Privacy is a right and a fundamental social value that could play a key role as a medium for securing digital identities. Identity functionality is increasingly delivered as sets of services rather than as monolithic applications, so an identity layer in which identity and privacy management services are loosely coupled, publicly hosted and available to on-demand calls is a more realistic and acceptable approach. Identity and privacy should be interoperable and distributed through the adoption of service orientation and an implementation based on open standards (technical interoperability). The objective of this project is to provide a way to implement interoperable, user-centric, digital identity-related privacy that responds to the distributed nature of federated identity systems. It is recognized that technical initiatives, emerging standards and protocols are not enough to resolve the concerns surrounding the multifaceted and complex issue of identity and privacy. For this reason they should be apprehended from a global perspective through an integrated, multidisciplinary approach. This approach dictates that privacy law, policies, regulations and technologies be crafted together from the start, rather than attached to digital identity after the fact. Thus, we draw Digital Identity-Related Privacy (DigIdeRP) requirements from global, domestic and business-specific privacy policies. The requirements take the shape of business interoperability. We suggest a layered implementation framework (the DigIdeRP framework), in accordance with the model-driven architecture (MDA) approach, that would help an organization's security team turn business interoperability into technical interoperability in the form of a set of services that can accommodate a Service-Oriented Architecture (SOA): a Privacy-as-a-set-of-services (PaaSS) system. The DigIdeRP framework will serve as a basis for a shared understanding between business management and technical managers on digital identity-related privacy initiatives. The layered DigIdeRP framework presents five practical layers as an ordered sequence that forms the basis of a DigIdeRP project roadmap; in practice, however, there is an iterative process to ensure that each layer effectively supports and enforces the requirements of the adjacent ones. Each layer is composed of a set of blocks, which determine a roadmap that the security team can follow to successfully implement PaaSS. Several block descriptions are based on the OMG SoaML modeling language and BPMN process descriptions. We identified, designed and implemented the seven services that form PaaSS and described their consumption. The PaaSS Java EE project, WSDL, and XSD code are given and explained.
Abstract:
SUMMARY: Large sets of data, such as expression profiles from many samples, require analytic tools to reduce their complexity. The Iterative Signature Algorithm (ISA) is a biclustering algorithm. It was designed to decompose a large set of data into so-called 'modules'. In the context of gene expression data, these modules consist of subsets of genes that exhibit a coherent expression profile only over a subset of microarray experiments. Genes and arrays may be attributed to multiple modules and the level of required coherence can be varied resulting in different 'resolutions' of the modular mapping. In this short note, we introduce two BioConductor software packages written in GNU R: The isa2 package includes an optimized implementation of the ISA and the eisa package provides a convenient interface to run the ISA, visualize its output and put the biclusters into biological context. Potential users of these packages are all R and BioConductor users dealing with tabular (e.g. gene expression) data. AVAILABILITY: http://www.unil.ch/cbg/ISA CONTACT: sven.bergmann@unil.ch
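The isa2 package contains the reference implementation in GNU R; the following Python sketch only illustrates the core ISA idea of alternately scoring and thresholding rows (genes) and columns (conditions) until a coherent bicluster stabilizes (the thresholds, normalization and random seeding here are simplified assumptions, not the isa2 defaults):

```python
import numpy as np

def zscore(v):
    return (v - v.mean()) / (v.std() + 1e-12)

def isa_module(E, row_thr=1.5, col_thr=1.5, n_iter=30, seed=0):
    """One ISA-style module search: alternate thresholded scoring of rows and columns."""
    rng = np.random.default_rng(seed)
    n_rows, n_cols = E.shape
    rows = rng.random(n_rows) < 0.1            # random seed set of genes (rows)
    cols = np.zeros(n_cols, dtype=bool)
    for _ in range(n_iter):
        if not rows.any():
            break
        cols = zscore(E[rows].mean(axis=0)) > col_thr      # conditions coherent over current genes
        if not cols.any():
            break
        rows = zscore(E[:, cols].mean(axis=1)) > row_thr   # genes coherent over current conditions
    return rows, cols

# toy usage: plant a bicluster in noise and try to recover it
E = np.random.default_rng(1).normal(size=(200, 40))
E[:30, :8] += 2.0
genes, conds = isa_module(E)
print(genes[:30].sum(), conds[:8].sum())
```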
Abstract:
The project consists of a web application that acts as a central file repository, where users can connect via HTTPS from any location, regardless of the level of security implemented on their workstation, and upload encrypted or unencrypted files with maximum guarantees of security and confidentiality. The files are stored in a database from which, later on, only accredited users holding an AES encryption/decryption key can retrieve them, also over HTTPS, and decrypt them locally.
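The abstract does not specify the AES mode or library used; here is a minimal sketch of the client-side encrypt-before-upload / decrypt-after-download idea, assuming AES-GCM from the Python cryptography package:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_bytes(plaintext, key):
    """Encrypt data with AES-GCM; the random 96-bit nonce is prepended to the ciphertext."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_blob(blob, key):
    """Split off the nonce, then authenticate and decrypt."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

# usage sketch: the key is held only by accredited users, never by the server
key = AESGCM.generate_key(bit_length=256)
blob = encrypt_bytes(b"confidential report", key)   # this blob travels over HTTPS and is stored in the DB
assert decrypt_blob(blob, key) == b"confidential report"
```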