978 results for "Normalization constraint"
Abstract:
Much research studies how individuals cope with disease threat by blaming out-groups and protecting the in-group. The model of collective symbolic coping (CSC) describes four stages by which representations of a threatening event are elaborated in the mass media: awareness, divergence, convergence, and normalization. We used the CSC model to predict when symbolic in-group protection (othering) would occur in the case of the avian influenza (AI) outbreak. Two studies documented CSC stages and showed that othering occurred during the divergence stage, characterized by an uncertain symbolic environment. Study 1 analysed media coverage of AI over time, documenting CSC stages of awareness and divergence. In Study 2, a two-wave repeated cross-sectional survey was conducted just after the divergence stage and a year later. Othering was measured by the number of foreign countries erroneously ticked by participants as having human victims. Individual differences in germ aversion and social dominance orientation interacted to predict othering during the divergence stage but not a year later. Implications for research on CSC and symbolic in-group protection strategies resulting from disease threat are discussed.
Abstract:
Type 1 diabetes (T1D) is rarely a component of primary immune dysregulation disorders. We report two cases in which T1D was associated with thrombocytopenia. The first patient, a 13-year-old boy, presented with immune thrombocytopenia (ITP), thyroiditis, and, 3 wk later, T1D. Because of severe thrombocytopenia resistant to immunoglobulins, high-dose steroids, and cyclosporine treatment, anti-cluster of differentiation (CD20) therapy was introduced, with consequent normalization of thrombocytes and weaning off of steroids. Three and 5 months after anti-CD20 therapy, levothyroxin and insulin therapy, respectively, were stopped. Ten months after stopping insulin treatment, normal C-peptide and hemoglobin A1c (HbA1c) levels and markedly reduced anti-glutamic acid decarboxylase (GAD) antibodies were measured. A second anti-CD20 trial for relapse of ITP was initiated 2 yr after the first trial. Anti-GAD antibody levels decreased again, but HbA1c stayed elevated and glucose monitoring showed elevated postprandial glycemia, demanding insulin therapy. To our knowledge, this is the first case in which insulin treatment could be interrupted for 28 months after anti-CD20 treatment. In patient two, thrombocytopenia followed a diagnosis of T1D 6 yr previously. Treatment with anti-CD20 led to normalization of thrombocytes, but no effect on T1D was observed. Concerning the origin of the boys' conditions, several primary immune dysregulation disorders were considered. Thrombocytopenia associated with T1D is unusual and could represent a new entity. The diabetes manifestation in patient one was probably triggered by corticosteroid treatment; regardless, anti-CD20 therapy appeared to be efficacious early in the course of T1D, but not long after the initial diagnosis of T1D, as shown for patient two.
Abstract:
The bias of αβ T cells for MHC ligands has been proposed to be intrinsic to the T-cell receptor (TCR). Equally, the CD4 and CD8 coreceptors contribute to ligand restriction by colocalizing Lck with the TCR when MHC ligands are engaged. To determine the importance of intrinsic ligand bias, the germ-line TCR complementarity determining regions were extensively diversified in vivo. We show that engagement with MHC ligands during thymocyte selection and peripheral T-cell activation imposes remarkably little constraint over TCR structure. Such versatility is more consistent with an opportunist, rather than a predetermined, mode of interface formation. This hypothesis was experimentally confirmed by expressing a hybrid TCR containing TCR-γ chain germ-line complementarity determining regions, which engaged efficiently with MHC ligands.
Abstract:
In this paper we propose a method for computing JPEG quantization matrices for a given mean square error (MSE) or PSNR. We then employ our method to compute JPEG standard progressive operation mode definition scripts using a quantization approach. It is therefore no longer necessary to use a trial-and-error procedure to obtain a desired PSNR and/or definition script, reducing cost. Firstly, we establish a relationship between a Laplacian source and its uniform quantization error. We apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard. An image may then be compressed using the JPEG standard under a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Secondly, we study the JPEG standard progressive operation mode from a quantization-based approach. A relationship between the measured image quality at a given stage of the coding process and a quantization matrix is found. Thus, the definition script construction problem can be reduced to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix. The PSNR estimate usually has an error smaller than 1 dB, and this error decreases for high PSNR values. Definition scripts may be generated that avoid an excessive number of stages and remove small stages that do not contribute a noticeable image-quality improvement during decoding.
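The trial-and-error loop this abstract eliminates revolves around the standard PSNR-MSE relationship for 8-bit images. As a minimal sketch of that conversion only (not the authors' quantization-matrix algorithm; the `peak` value of 255 is an assumption for 8-bit data):

```python
import math

def psnr_from_mse(mse, peak=255.0):
    # PSNR in dB for a given mean square error: 10 * log10(peak^2 / MSE)
    return 10.0 * math.log10(peak ** 2 / mse)

def mse_from_psnr(psnr, peak=255.0):
    # Inverse relation: the target MSE that achieves a requested PSNR
    return peak ** 2 / (10.0 ** (psnr / 10.0))
```

A method like the one described would take the target MSE from `mse_from_psnr` and derive quantization steps directly, rather than iterating compress-measure-adjust.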
Abstract:
Extensible Dependency Grammar (XDG; Debusmann, 2007) is a flexible, modular dependency grammar framework in which sentence analyses consist of multigraphs and processing takes the form of constraint satisfaction. This paper shows how XDG lends itself to grammar-driven machine translation and introduces the machinery necessary for synchronous XDG. Since the approach relies on a shared semantics, it resembles interlingua MT. It differs in that there are no separate analysis and generation phases. Rather, translation consists of the simultaneous analysis and generation of a single source-target sentence.
Abstract:
Many engineering problems that can be formulated as constrained optimization problems result in solutions given by a waterfilling structure; the classical example is the capacity-achieving solution for a frequency-selective channel. For simple waterfilling solutions with a single waterlevel and a single constraint (typically, a power constraint), some algorithms have been proposed in the literature to compute the solutions numerically. However, some other optimization problems result in significantly more complicated waterfilling solutions that include multiple waterlevels and multiple constraints. For such cases, it may still be possible to obtain practical algorithms to evaluate the solutions numerically, but only after a painstaking inspection of the specific waterfilling structure. In addition, a unified view of the different types of waterfilling solutions and the corresponding practical algorithms is missing. The purpose of this paper is twofold. On the one hand, it overviews the waterfilling results existing in the literature from a unified viewpoint. On the other hand, it bridges the gap between a wide family of waterfilling solutions and their efficient implementation in practice; to be more precise, it provides a practical algorithm to evaluate numerically a general waterfilling solution, which includes the currently existing waterfilling solutions and others that may possibly appear in future problems.
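For the simple single-waterlevel, single-power-constraint case mentioned above, bisection on the waterlevel is one standard practical algorithm. This sketch assumes the familiar rate-maximization form (maximize sum of log(1 + g_i * p_i) subject to sum of p_i = P); it illustrates the simple case only, not the paper's general multi-level algorithm:

```python
import numpy as np

def waterfilling(gains, total_power, tol=1e-10):
    """Single-waterlevel waterfilling: p_i = max(0, mu - 1/g_i),
    with the waterlevel mu found by bisection so that sum(p_i) = total_power."""
    inv = 1.0 / np.asarray(gains, dtype=float)  # inverse channel gains (noise floors)
    lo, hi = 0.0, inv.max() + total_power       # bracket guaranteed to contain mu
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        # Allocated power at this trial waterlevel
        if np.maximum(mu - inv, 0.0).sum() > total_power:
            hi = mu  # too much water: lower the level
        else:
            lo = mu  # not enough: raise the level
    return np.maximum(0.5 * (lo + hi) - inv, 0.0)
```

Weak channels whose inverse gain lies above the final waterlevel receive exactly zero power, which is the defining feature of the waterfilling structure.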
Abstract:
The well-known structure of an array combiner along with a maximum likelihood sequence estimator (MLSE) receiver is the basis for the derivation of a space-time processor presenting good properties in terms of co-channel and intersymbol interference rejection. The use of spatial diversity at the receiver front-end together with a scalar MLSE implies a joint design of the spatial combiner and the impulse response for the sequence detector. This is faced using the MMSE criterion under the constraint that the desired user signal power is not cancelled, yielding an impulse response for the sequence detector that is matched to the channel and combiner response. The procedure maximizes the signal-to-noise ratio at the input of the detector and exhibits excellent performance in realistic multipath channels.
Abstract:
This paper presents a relational positioning methodology for flexibly and intuitively specifying offline programmed robot tasks, as well as for assisting the execution of teleoperated tasks demanding precise movements. In relational positioning, the movements of an object can be restricted totally or partially by specifying its allowed positions in terms of a set of geometric constraints. These allowed positions are found by means of a 3D sequential geometric constraint solver called PMF – Positioning Mobile with respect to Fixed. PMF exploits the fact that in a set of geometric constraints, the rotational component can often be separated from the translational one and solved independently.
Abstract:
A regularization method based on the non-extensive maximum entropy principle is devised. Special emphasis is given to the q = 1/2 case. We show that, when the residual principle is considered as a constraint, the q = 1/2 generalized distribution of Tsallis yields a regularized solution for ill-conditioned problems. The resulting regularized distribution is endowed with a component which corresponds to the well-known regularized solution of Tikhonov (1977).
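For context on the Tikhonov component mentioned above, this is a minimal sketch of the classical Tikhonov-regularized least-squares solution (the baseline the non-extensive method recovers a component of; not the q = 1/2 derivation itself):

```python
import numpy as np

def tikhonov(A, b, lam):
    # Classical Tikhonov regularization:
    #   x = argmin ||A x - b||^2 + lam * ||x||^2
    # Closed form: x = (A^T A + lam * I)^{-1} A^T b
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

The regularization parameter `lam` trades data fidelity against solution norm; the residual principle mentioned in the abstract is one classical rule for choosing it.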
Abstract:
In vivo fetal magnetic resonance imaging provides a unique approach for the study of early human brain development [1]. In utero cerebral morphometry could potentially be used as a marker of cerebral maturation and help to distinguish between normal and abnormal development in ambiguous situations. However, this quantitative approach is a major challenge because of the movement of the fetus inside the amniotic cavity, the poor spatial resolution provided by very fast MRI sequences, and the partial volume effect. Extensive efforts are made to deal with the reconstruction of high-resolution 3D fetal volumes based on several acquisitions with lower resolution [2,3,4]. Frameworks were developed for the segmentation of specific regions of the fetal brain such as the posterior fossa, brainstem, or germinal matrix [5,6], or for the entire brain tissue [7,8], applying the Expectation-Maximization Markov Random Field (EM-MRF) framework. However, many of these previous works focused on the young fetus (i.e. before 24 weeks) and use anatomical atlas priors to segment the different tissues or regions. As most of the gyral development takes place after the 24th week, a comprehensive and clinically meaningful study of the fetal brain should not dismiss the third trimester of gestation. To cope with the rapidly changing appearance of the developing brain, some authors proposed a dynamic atlas [8]. In our opinion, however, this approach faces a risk of circularity: each brain will be analyzed / deformed using the template of its biological age, potentially biasing the effective developmental delay. Here, we expand our previous work [9] to propose a post-processing pipeline without priors that allows a comprehensive set of morphometric measurements devoted to clinical application.
Data set & Methods: Prenatal MR imaging was performed with a 1-T system (GE Medical Systems, Milwaukee) using single shot fast spin echo (ssFSE) sequences (TR 7000 ms, TE 180 ms, FOV 40 x 40 cm, slice thickness 5.4 mm, in-plane spatial resolution 1.09 mm). For each fetus, 6 axial volumes shifted by 1 mm were acquired under mother's sedation (about 1 min per volume). First, each volume is segmented semi-automatically using region-growing algorithms to extract the fetal brain from surrounding maternal tissues. Inhomogeneity intensity correction [10] and linear intensity normalization are then performed. Brain tissues (CSF, GM and WM) are then segmented based on the low-resolution volumes as presented in [9]. A high-resolution image with isotropic voxel size of 1.09 mm is created as proposed in [2], using B-splines for the scattered data interpolation [11]. Basal ganglia segmentation is performed using a level set implementation on the high-resolution volume [12]. The resulting white matter image is then binarized and given as an input to the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu) to provide topologically accurate three-dimensional reconstructions of the fetal brain according to the local intensity gradient. References: [1] Guibaud, Prenatal Diagnosis 29(4), 2009. [2] Rousseau, Acad. Rad. 13(9), 2006. [3] Jiang, IEEE TMI, 2007. [4] Warfield, IADB, MICCAI 2009. [5] Claude, IEEE Trans. Bio. Eng. 51(4), 2004. [6] Habas, MICCAI 2008. [7] Bertelsen, ISMRM 2009. [8] Habas, Neuroimage 53(2), 2010. [9] Bach Cuadra, IADB, MICCAI 2009. [10] Styner, IEEE TMI 19(3), 2000. [11] Lee, IEEE Trans. Visual. and Comp. Graph. 3(3), 1997. [12] Bach Cuadra, ISMRM 2010.
Abstract:
A paper machine's service platform consists of a load-bearing frame structure and the supports for the frame structures. The manufacture of the load-bearing frame structure is traditional welding-based workshop production. This work presents solutions for developing the manufacturing process; their aim is to reduce manufacturing lead time and ultimately to respond to the tightened competitive situation. A broader goal is to demonstrate the possibilities of automation in this type of production. Deliveries of aluminium structures have grown steadily. The company wanted to invest in this structure group, make the product the best on the market, and develop its manufacturing in order to meet competition and short delivery times. The solutions presented in this work are aimed at developing the manufacture of the aluminium load-bearing frame. Developing the manufacturing process comprises designing the product's structural components for manufacturability, increasing automation in part production, and light mechanization of welding.
Abstract:
This report presents a comparative study of various joining methods used in sheet metal production. It covers the selection of joining methods, which involves comparing the advantages and disadvantages of each method against the others and choosing the best method for joining. On the basis of various joining processes from the references, a table is generated containing a set of criteria that helps in evaluating various sheet metal joining processes and in selecting the most suitable process for a particular product. Three products are selected, and a comprehensive study of their joining methods is carried out with the help of various parameters. The table is thus the main part of the analysis in this study and can be extended with further results. It supports an easier understanding and comparison of the various methods, which provides the foundation of this study and analysis. The suitability of a joining method for different cases of sheet metal products can be tested with the help of this table. The sections of the table display the requirements of manufacturing, with particular focus on how heavily each parameter is weighted, in percentages, for a particular or individual case. The analysis of the methods can be extended or altered by changing the parameters according to the constraints. The use of the table is demonstrated with cases from sheet metal production.
Abstract:
BACKGROUND: So far, none of the existing methods on Murray's law deal with the non-Newtonian behavior of blood flow, although the non-Newtonian approach to blood flow modelling is more accurate. MODELING: In the present paper, Murray's law, which is applicable to an arterial bifurcation, is generalized to a non-Newtonian blood flow model (power-law model). When the vessel size reaches the capillary limit, blood can be modeled using a non-Newtonian constitutive equation. Two different constraints are assumed in addition to the pumping power: the volume constraint or the surface constraint (related to the internal surface of the vessel). For the sake of generality, the relationships are given for an arbitrary number of daughter vessels. It is shown that for a cost function including the volume constraint, classical Murray's law remains valid (i.e. Σ R^c = const with c = 3 is verified and is independent of n, the dimensionless index in the viscosity equation; R being the radius of the vessel). On the contrary, for a cost function including the surface constraint, different values of c may be calculated depending on the value of n. RESULTS: We find that c varies for blood from 2.42 to 3 depending on the constraint and the fluid properties. For the Newtonian model, the surface constraint leads to c = 2.5. The cost function (based on the surface constraint) can be related to entropy generation by dividing it by the temperature. CONCLUSION: It is demonstrated that the entropy generated in all the daughter vessels is greater than the entropy generated in the parent vessel. Furthermore, it is shown that the difference in entropy generation between the parent and daughter vessels is smaller for a non-Newtonian fluid than for a Newtonian fluid.
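As a concrete reading of the Σ R^c = const relation above, this hypothetical helper computes the daughter radius for the special case of N equal daughters, where R_d = R_p · N^(-1/c). The exponent c = 3 recovers classical Murray's law; other values from the abstract (e.g. c = 2.5 for the Newtonian surface constraint) can be passed instead:

```python
def daughter_radius(parent_radius, n_daughters, c=3.0):
    """Radius of each of N equal daughter vessels under the
    generalized Murray-type law: sum_i R_i^c = R_parent^c.
    For equal daughters this gives R_d = R_p * N**(-1/c)."""
    return parent_radius * n_daughters ** (-1.0 / c)
```

For a symmetric bifurcation (N = 2, c = 3) each daughter has about 79% of the parent radius, so the combined cross-sectional area increases downstream, consistent with the slowing of blood toward the capillaries.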