947 results for iterative determinant maximization


Relevance: 10.00%

Abstract:

The timing of puberty is highly variable. We carried out a genome-wide association study for age at menarche in 4,714 women and report an association in LIN28B on chromosome 6 (rs314276, minor allele frequency (MAF) = 0.33, P = 1.5 × 10^-8). In independent replication studies in 16,373 women, each major allele was associated with 0.12 years earlier menarche (95% CI = 0.08-0.16; P = 2.8 × 10^-10; combined P = 3.6 × 10^-16). This allele was also associated with earlier breast development in girls (P = 0.001; N = 4,271); earlier voice breaking (P = 0.006, N = 1,026) and more advanced pubic hair development in boys (P = 0.01; N = 4,588); a faster tempo of height growth in girls (P = 0.00008; N = 4,271) and boys (P = 0.03; N = 4,588); and shorter adult height in women (P = 3.6 × 10^-7; N = 17,274) and men (P = 0.006; N = 9,840), in keeping with earlier growth cessation. These studies identify variation in LIN28B, a potent and specific regulator of microRNA processing, as the first genetic determinant regulating the timing of human pubertal growth and development.

Relevance: 10.00%

Abstract:

OBJECTIVE: To investigate the hemodynamic effects of L-canavanine (an inhibitor of inducible, but not of constitutive, nitric oxide synthase) in endotoxic shock. DESIGN: Controlled, randomized, experimental study. SETTING: Animal laboratory. SUBJECTS: Wistar rats. INTERVENTIONS: Rats were anesthetized with pentobarbital, and hemodynamically monitored. One hour after an intravenous challenge with 5 mg/kg of Escherichia coli endotoxin, the rats were randomized to receive a continuous infusion of either L-canavanine (20 mg/kg/hr; n = 8) or vehicle only (isotonic saline, n = 11). In all animals, the infusion was given over 5 hrs at a rate of 2 mL/kg/hr. These experiments were repeated in additional rats challenged with isotonic saline instead of endotoxin (sham experiments). MEASUREMENTS AND MAIN RESULTS: Arterial blood pressure, heart rate, thermodilution cardiac output, central venous pressure, mean systemic filling pressure, urine output, arterial blood gases, blood lactate concentration, and hematocrit were measured. In sham experiments, hemodynamic stability was maintained throughout and L-canavanine had no detectable effect. Animals challenged with endotoxin and not treated with L-canavanine developed progressive hypotension and low cardiac output. After 6 hrs of endotoxemia, both central venous pressure and mean systemic filling pressure were significantly below their baseline values, indicating relative hypovolemia as the main determinant of reduced cardiac output. In endotoxemic animals treated with L-canavanine, hypotension was less marked, while cardiac output, central venous pressure, and mean systemic filling pressure were maintained throughout the experiment. L-canavanine had no effect on the time-course of hematocrit. L-canavanine significantly increased urine output and reduced the severity of lactic acidosis. 
CONCLUSIONS: Six hours after an endotoxin challenge in rats, low cardiac output develops, which appears to be primarily related to relative hypovolemia. L-canavanine, a selective inhibitor of the inducible nitric oxide synthase, increases the mean systemic filling pressure, thereby improving venous return, under these conditions.

Relevance: 10.00%

Abstract:

We characterize the capacity-achieving input covariance for multi-antenna channels known instantaneously at the receiver and in distribution at the transmitter. Our characterization, valid for arbitrary numbers of antennas, encompasses both the eigenvectors and the eigenvalues. The eigenvectors are found for zero-mean channels with arbitrary fading profiles and a wide range of correlation and keyhole structures. For the eigenvalues, in turn, we present necessary and sufficient conditions as well as an iterative algorithm that exhibits remarkable properties: universal applicability, robustness and rapid convergence. In addition, we identify channel structures for which an isotropic input achieves capacity.

Relevance: 10.00%

Abstract:

We design powerful low-density parity-check (LDPC) codes with iterative decoding for the block-fading channel. We first study the case of maximum-likelihood decoding, and show that the design criterion is rather straightforward. Since optimal constructions for maximum-likelihood decoding do not perform well under iterative decoding, we introduce a new family of full-diversity LDPC codes that exhibit near-outage-limit performance under iterative decoding for all block lengths. This family competes favorably with multiplexed parallel turbo codes for nonergodic channels.

Relevance: 10.00%

Abstract:

This workshop paper argues that fostering active student participation, both in face-to-face lectures/seminars and outside the classroom (personal and group study at home, in the library, etc.), requires a certain level of teacher-led inquiry. The paper presents a set of strategies drawn from real practice in higher education with teacher-led inquiry ingredients that promote active learning. These practices highlight the role of the syllabus, the importance of iterative learning designs, explicit teacher-led inquiry, and the implications of context, sustainability and practitioners' creativity. The strategies discussed in this paper can serve as input to the workshop as real cases that need to be represented in design and supported in enactment (with and without technologies).

Relevance: 10.00%

Abstract:

In this paper we propose a new approach for tonic identification in Indian art music and present a proposal for a complete iterative system for the same. Our method splits the task of tonic pitch identification into two stages. In the first stage, which is applicable to both vocal and instrumental music, we perform a multi-pitch analysis of the audio signal to identify the tonic pitch class. Multi-pitch analysis allows us to take advantage of the drone sound, which constantly reinforces the tonic. In the second stage we estimate the octave in which the tonic of the singer lies; this stage is thus needed only for vocal performances. We analyse the predominant melody sung by the lead performer in order to establish the tonic octave. Both stages are individually evaluated on a sizable music collection and are shown to obtain good accuracy. We also discuss the types of errors made by the method. Further, we present a proposal for a system that aims to incrementally utilize all the available data, both audio and metadata, in order to identify the tonic pitch. It produces a tonic estimate and a confidence value, and is iterative in nature. At each iteration, more data is fed into the system until the confidence value for the identified tonic is above a defined threshold. Rather than obtain high overall accuracy for our complete database, ultimately our goal is to develop a system which obtains very high accuracy on a subset of the database with maximum confidence.
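The confidence-driven iterative loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the data sources, the majority-vote estimator, and the confidence model are all hypothetical placeholders.

```python
# Minimal sketch of the confidence-driven loop described above.
# The data sources, estimator, and confidence model are hypothetical
# placeholders, not the authors' actual system.

def identify_tonic(sources, estimate, threshold=0.9):
    """Feed data sources in one at a time; stop as soon as the
    estimate's confidence value reaches the threshold."""
    used = []
    tonic, confidence = None, 0.0
    for source in sources:
        used.append(source)
        tonic, confidence = estimate(used)
        if confidence >= threshold:
            break
    return tonic, confidence

def toy_estimate(data):
    """Toy estimator: majority vote over per-source tonic candidates;
    confidence is the vote share, discounted when evidence is scarce."""
    votes = [d["tonic"] for d in data]
    best = max(set(votes), key=votes.count)
    share = votes.count(best) / len(votes)
    return best, share * len(votes) / (len(votes) + 1)

# Hypothetical per-source tonic candidates (Hz): two agree, one does not.
sources = [{"tonic": 146.8}, {"tonic": 146.8}, {"tonic": 138.6}]
tonic, conf = identify_tonic(sources, toy_estimate, threshold=0.6)
```

With this toy data the loop stops after the second source, once the vote-based confidence clears the threshold, mirroring the idea of feeding in only as much data as is needed.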

Relevance: 10.00%

Abstract:

We show how to build full-diversity product codes under both iterative encoding and decoding over non-ergodic channels, in the presence of block erasure and block fading. The concept of a root check or a root subcode is introduced by generalizing the same principle recently invented for low-density parity-check codes. We also describe some channel-related graphical properties of the new family of product codes, a family referred to as root product codes.

Relevance: 10.00%

Abstract:

For single-user MIMO communication with uncoded and coded QAM signals, we propose bit and power loading schemes that rely only on channel distribution information at the transmitter. To that end, we develop the relationship between the average bit error probability at the output of a ZF linear receiver and the bit rates and powers allocated at the transmitter. This relationship, and the fact that a ZF receiver decouples the MIMO parallel channels, allow leveraging bit loading algorithms already existing in the literature. We solve dual bit rate maximization and power minimization problems and present performance results that illustrate the gains of the proposed scheme with respect to a non-optimized transmission.
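Since the ZF receiver decouples the MIMO channel into parallel subchannels, classical greedy (Hughes-Hartogs-style) bit loading applies directly. The sketch below is a textbook version under an assumed (2^b - 1)/gain power model at a fixed target error rate; it is not the paper's exact formulation.

```python
# Greedy bit loading over ZF-decoupled subchannels: repeatedly grant one
# extra bit to the subchannel that needs the least incremental power.
# The power model (2^b - 1)/gain at a fixed target error rate is a
# standard textbook assumption, not the paper's exact expression.

def greedy_bit_loading(gains, power_budget, max_bits=8):
    bits = [0] * len(gains)
    power = [0.0] * len(gains)
    used = 0.0
    while True:
        best_k, best_dp = None, None
        for k, g in enumerate(gains):
            if bits[k] >= max_bits:
                continue
            # Incremental power to go from bits[k] to bits[k] + 1 bits.
            dp = ((2 ** (bits[k] + 1) - 1) - (2 ** bits[k] - 1)) / g
            if best_dp is None or dp < best_dp:
                best_k, best_dp = k, dp
        if best_k is None or used + best_dp > power_budget:
            break
        bits[best_k] += 1
        power[best_k] += best_dp
        used += best_dp
    return bits, power

# Two hypothetical subchannel gains and a normalized power budget.
bits, power = greedy_bit_loading([4.0, 1.0], power_budget=2.0)
```

The dual power-minimization problem inverts the stopping rule: keep adding the cheapest bits until a target total rate is met, then report the power spent.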

Relevance: 10.00%

Abstract:

This paper derives approximations allowing the estimation of outage probability for standard irregular LDPC codes and full-diversity Root-LDPC codes used over nonergodic block-fading channels. Two separate approaches are discussed: a numerical approximation, obtained by curve fitting, for both code ensembles, and an analytical approximation for Root-LDPC codes, obtained under the assumption that the slope of the iterative threshold curve of a given code ensemble matches the slope of the outage capacity curve in the high-SNR regime.
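As a toy illustration of the numerical curve-fitting approach, one can fit the simulated outage curve in the log domain, where at high SNR it is close to a straight line whose slope reflects the diversity behaviour. The sample points and the log-linear model below are invented for illustration, not the paper's data or approximation.

```python
# Illustrative sketch of a numerical outage approximation by curve
# fitting: fit log10(Pout) versus SNR (dB) with a least-squares line,
# then read off the outage probability at any SNR. The sample points
# and the log-linear model are assumptions, not the paper's data.

def fit_log_linear(snr_db, log10_pout):
    """Least-squares fit of log10(Pout) ~ a * snr_db + b."""
    n = len(snr_db)
    mx = sum(snr_db) / n
    my = sum(log10_pout) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(snr_db, log10_pout))
         / sum((x - mx) ** 2 for x in snr_db))
    b = my - a * mx
    return a, b

# Invented points decaying one decade per 10 dB (slope -0.1 decade/dB).
snr = [10.0, 15.0, 20.0, 25.0]
logp = [-1.0, -1.5, -2.0, -2.5]
a, b = fit_log_linear(snr, logp)
pout_at_30db = 10 ** (a * 30.0 + b)
```

The fitted slope is the quantity the analytical approximation matches against the outage capacity curve in the high-SNR regime.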

Relevance: 10.00%

Abstract:

The Silver Code (SilC) was originally discovered in [1–4] for 2×2 multiple-input multiple-output (MIMO) transmission. It has non-vanishing minimum determinant 1/7, slightly lower than the Golden code, but is fast-decodable, i.e., it allows reduced-complexity maximum-likelihood decoding [5–7]. In this paper, we present a multidimensional trellis-coded modulation scheme for MIMO systems [11] based on set partitioning of the Silver Code, named Silver Space-Time Trellis Coded Modulation (SST-TCM). This lattice set partitioning is designed specifically to increase the minimum determinant. The branches of the outer trellis code are labeled with these partitions. The Viterbi algorithm is applied for trellis decoding, while the branch metrics are computed by using a sphere-decoding algorithm. It is shown that the proposed SST-TCM performs very close to the Golden Space-Time Trellis Coded Modulation (GST-TCM) scheme, yet with a much reduced decoding complexity thanks to its fast-decodability.

Relevance: 10.00%

Abstract:

Forensic science casework involves making a series of choices. The difficulty in making these choices lies in the inevitable presence of uncertainty, the unique context of circumstances surrounding each decision and, in some cases, the complexity due to numerous, interrelated random variables. Given that these decisions can lead to serious consequences in the administration of justice, forensic decision making should be supported by a robust framework that makes inferences under uncertainty and decisions based on these inferences. The objective of this thesis is to respond to this need by presenting a framework for making rational choices in decision problems encountered by scientists in forensic science laboratories. Bayesian inference and decision theory meet the requirements for such a framework. To attain its objective, this thesis consists of three propositions, advocating the use of (1) decision theory, (2) Bayesian networks, and (3) influence diagrams for handling forensic inference and decision problems. The results present a uniform and coherent framework for making inferences and decisions in forensic science using the above theoretical concepts. They describe how to organize each type of problem by breaking it down into its different elements, and how to find the most rational course of action by distinguishing between one-stage and two-stage decision problems and applying the principle of expected utility maximization.
To illustrate the framework's application to the problems encountered by scientists in forensic science laboratories, theoretical case studies apply decision theory, Bayesian networks and influence diagrams to a selection of different types of inference and decision problems dealing with different categories of trace evidence. Two studies of the two-trace problem illustrate how the construction of Bayesian networks can handle complex inference problems, and thus overcome the hurdle of complexity that can be present in decision problems.
Three studies - one on what to conclude when a database search provides exactly one hit, one on what genotype to search for in a database based on the observations made on DNA typing results, and one on whether to submit a fingermark to the process of comparing it with prints of its potential sources - explain the application of decision theory and influence diagrams to each of these decisions. The results of the theoretical case studies support the thesis's three propositions. Hence, this thesis presents a uniform framework for organizing and finding the most rational course of action in decision problems encountered by scientists in forensic science laboratories. The proposed framework is an interactive and exploratory tool for better understanding a decision problem so that this understanding may lead to better informed choices.
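The one-stage decision rule the thesis builds on, maximizing expected utility, can be sketched in a few lines. The states, probabilities, and utilities below are invented for illustration; they are not taken from any actual casework.

```python
# Sketch of the principle of expected utility maximization for a
# one-stage decision problem. All numbers are invented illustrations.

def best_action(actions, states, p, u):
    """Return the action a maximizing sum over states s of p[s] * u[(a, s)]."""
    return max(actions, key=lambda a: sum(p[s] * u[(a, s)] for s in states))

# Toy fingermark decision: submit the mark for comparison or not,
# given a probability that it comes from the suspected source.
states = ["same_source", "different_source"]
p = {"same_source": 0.7, "different_source": 0.3}
u = {
    ("submit", "same_source"): 10.0,       # useful association found
    ("submit", "different_source"): -2.0,  # examination cost, no gain
    ("dont_submit", "same_source"): 0.0,
    ("dont_submit", "different_source"): 0.0,
}
choice = best_action(["submit", "dont_submit"], states, p, u)
```

With these toy numbers, EU(submit) = 0.7 × 10 - 0.3 × 2 = 6.4 exceeds EU(dont_submit) = 0, so the rational one-stage choice is to submit; a two-stage problem would fold back the expected utilities of the later stage before applying the same rule.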

Relevance: 10.00%

Abstract:

This article challenges the notion of economic rationality as a criterion for explaining ethnic boundary maintenance. It offers an ethnographic analysis of inter-ethnic relations in the context of games (cockfights and game-fishing contests) on the island of Raiatea (French Polynesia). Although all players engage in the same basic gambling practices, money is differentially scaled and mobilized by the Tahitian and Chinese participants. Building on recent pragmatic approaches to rationality, it is shown that the players' rationalities differ not from the point of view of economic maximization, but only in so far as they participate in social relations at different scales.

Relevance: 10.00%

Abstract:

We study the interplay of preferences and market productivities on parenting, and show that preferences, when identified, provide a better explanation of caring decisions than has so far been demonstrated in the literature. We qualify the standard finding that parental education is a key determinant of care by showing important interaction effects with marital homogamy. We find that homogamy has opposite effects on child care and couple specialization for high- and low-educated parents. Identification has been made possible by a unique couple-based time diary study for Denmark.

Relevance: 10.00%

Abstract:

For a wide range of environmental, hydrological, and engineering applications there is a fast-growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove to be inadequate in complex environments. For a typical crosshole georadar survey, the potential improvement in resolution when using waveform-based approaches instead of ray-based approaches is in the range of one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While in exploration seismology waveform tomographic imaging has become well established over the past two decades, it is still comparably underdeveloped in the georadar domain despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes on synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments.
To this end, the general motivation of my thesis is the evaluation of the robustness and limitations of waveform inversion algorithms for crosshole georadar data, in order to apply such schemes to a wide range of real-world problems. One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which is unknown in reality. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem. Therefore, accurate knowledge of the source wavelet is critically important for the successful application of such schemes. Relatively small differences in the estimated source wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source wavelet estimation technique is simple yet effective, and is able to provide remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and electrical conductivity, as well as significant ambient noise in the recorded data. Furthermore, our tests also indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when directly incorporating the wavelet estimation into the inverse problem. Another critical issue with crosshole georadar waveform inversion schemes, which clearly needs to be investigated, is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters.
This is crucial since, in reality, these parameters are known to be frequency-dependent and complex, and thus recorded georadar data may show significant dispersive behaviour. In particular, in the presence of water, there is a wide body of evidence showing that the dielectric permittivity can be significantly frequency-dependent over the GPR frequency range, due to a variety of relaxation processes. The second part of my thesis is therefore dedicated to the evaluation of the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source wavelet estimation procedure that is partially able to account for the frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and has the ability to provide adequate tomographic reconstructions.
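A stripped-down version of the kind of deconvolution step underlying such a source wavelet estimate can be sketched as follows: given recorded traces and the wavefields simulated from the current model, a common wavelet is obtained by stabilized least-squares deconvolution in the frequency domain. The DFT implementation, the stabilization constant, and the toy delta-response data are assumptions for illustration, not the thesis's actual scheme.

```python
# Sketch of a stabilized frequency-domain deconvolution step for source
# wavelet estimation: W(f) = sum_i D_i(f) G_i*(f) / (sum_i |G_i(f)|^2 + eps),
# where D_i are recorded traces and G_i are modeled impulse responses.
# The toy data and the stabilization constant eps are assumptions.
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [(sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                 for k in range(n)) / n).real for t in range(n)]

def estimate_wavelet(traces, responses, eps=1e-3):
    """Least-squares wavelet common to all trace/response pairs."""
    D = [dft(d) for d in traces]
    G = [dft(g) for g in responses]
    n = len(traces[0])
    W = []
    for k in range(n):
        num = sum(Di[k] * Gi[k].conjugate() for Di, Gi in zip(D, G))
        den = sum(abs(Gi[k]) ** 2 for Gi in G) + eps
        W.append(num / den)
    return idft(W)

# Toy check: with a delta impulse response, the recorded trace equals the
# wavelet, and the estimate recovers it up to the stabilization factor.
true_wavelet = [1.0, 0.5, 0.0, 0.0]
est = estimate_wavelet([true_wavelet], [[1.0, 0.0, 0.0, 0.0]])
```

In an actual inversion loop this estimate would be refreshed as the subsurface model (and hence the modeled responses) is updated at each iteration.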

Relevance: 10.00%

Abstract:

Since its beginnings, the human being has had a series of needs: food to survive, a roof for shelter, clothes to wear, and so on. Over the years, humans have developed techniques to improve their stay on the planet, from the discovery of fire for warmth to the development of complex machines, introducing inventions into their lives that have met the needs arising over time. The work we develop below focuses on a series of devices that emerged over the course of the twentieth century and that have revolutionized our lives. Specifically, the seven devices chosen as the object of our study are: the refrigerator, the washing machine, the automobile, the television, the portable audio player, the mobile phone and the computer. The appearance of household appliances such as the refrigerator or the washing machine has not only allowed us to devote less effort to domestic tasks, but also to save time when carrying them out. This fact, which we may not even consider now, was decisive at the time these appliances emerged, since it freed up domestic labour and promoted the entry into the labour market of a sector of the population, namely women. For its part, the introduction of the automobile has also been a great advance: it has revolutionized the way we travel, providing us with a previously unthinkable autonomy. The rest of the chosen devices follow this same pattern; all of them brought about a profound change in society when they appeared and, even today, they continue to play a prominent role in our daily lives.
The importance they have in our lives has been the motivation that led us to carry out a work with them as protagonists. Our work consists of two parts. In the first, of a theoretical nature, we set out to study the history of these devices; that is, we wanted to answer a series of questions such as when they appeared, who their creators were and what transformations they have undergone over time, among others. In the second, more practical part, we carried out a field study (whose characteristics we explain in detail in the following section) with the aim of analysing the relevance these devices have in today's society. That is, we wanted to know which of the chosen devices are the most valued by the population today and which are the least. We think that the use the respondents make of these goods will be a key criterion when rating them, although not the only one. We carried out this analysis differentiating individuals by age group and sex. In addition, we also addressed other questions such as whether their use is understood as a necessity or is aimed at leisure, how long users take to replace them, and for what reason, among others. In relation to this second part, we formulated a series of hypotheses with the aim of subsequently contrasting them with the survey results. The hypotheses are: - Today, individuals attach more importance to the more modern devices (understood as those that have emerged most recently and are linked to new technologies and continuous innovation), such as the computer and the mobile phone, than to the more traditional household appliances such as the refrigerator or the washing machine.
All this, moreover, considering that part of the use of the former is devoted to leisure, whereas the use of the latter is motivated by a necessity, such as food in the case of the refrigerator. - As for the age groups, the ratings will show important differences across the various devices. Between the youngest group (16 to 25 years old) and the oldest (over 60), the ratings will be the most extreme, almost opposite. - As for the two sexes, the ratings given to the devices will be very similar. We considered that, in order to carry out a good analysis, the most appropriate approach is to present the information for each device separately. That is why each good constitutes an independent section, consisting of a first part on its history and a second on the results obtained from the surveys. We then combined the results from the different devices and presented them jointly in order to compare them. In this way, we were able to contrast our initial hypotheses and reason about possible causes that would justify the results obtained.