904 results for Squares
Abstract:
While treatment of keloids and hypertrophic scars normally shows modest results, we found that treatment with bleomycin was more promising. The present study was divided into two parts. In the first part, the aim was to show the results of using a combination of bleomycin and triamcinolone acetonide per cm2 (BTA). In the second part, the objective was to determine the response to both drugs in large keloids that were divided into 1 cm2 squares, treating each square with the dose previously used. In the first part of the study, the clinical response of 37 keloids ranging from 0.3 to 1.8 cm2 treated with BTA was followed up over a period of 1-2 years. Doses of 0.375 IU bleomycin and 4 mg triamcinolone acetonide were injected every 3 months. In the second part of the study, we reviewed the clinical response in six patients with large keloids. The monthly dose administered never exceeded 3 IU of bleomycin. The first study showed 36 keloids (97.29%) softening after the first dose. In the second study, five keloids showed different responses (the response was complete in the four smaller keloids). The largest keloid needed 9 doses to achieve an improvement of 70%. In conclusion, combined treatment with 0.375 IU of bleomycin and 4 mg of triamcinolone acetonide per 1 cm2 was considered to be an acceptable procedure for the treatment of keloids. The best results were obtained in keloids over 1 cm2 or when divided into 1 cm2 square areas. Larger series need to be performed in order to confirm these results.
Abstract:
The research reported here aims to investigate the antecedents of the intention to use home broker systems from the perspective of stock market investors. To achieve this objective, a theoretical model was developed and research hypotheses were proposed, drawing on a theoretical framework covering information systems acceptance, diffusion of innovation, trust in virtual environments, and user satisfaction. Using structural equation modeling techniques based on Partial Least Squares (PLS) and 152 valid questionnaires collected through a web survey of Brazilian stock market investors, the proposed model and the research hypotheses were tested. Compatibility, perceived usefulness, and perceived ease of use were identified as statistically significant antecedents of user satisfaction with the home broker system, which in turn had a statistically significant effect on the intention to use the system. The academic and managerial implications of the work are also presented, along with its limitations and a research agenda for this important area of knowledge.
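As a toy illustration of the kind of structural test reported above, the sketch below checks a single path (user satisfaction -> intention to use) by bootstrapping a standardized regression coefficient in Python. It is only a rough stand-in for the study's PLS estimation: composites are formed with equal weights rather than PLS outer weights, and all data and variable names are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 152  # sample size reported in the abstract

    # Hypothetical standardized indicators: three items per construct.
    sat_items = rng.normal(size=(n, 3))
    int_items = 0.6 * sat_items.mean(axis=1, keepdims=True) + rng.normal(size=(n, 3))

    # Equal-weight composites (a simplification of PLS outer estimation).
    satisfaction = sat_items.mean(axis=1)
    intention = int_items.mean(axis=1)

    def path_coefficient(x, y):
        # Standardized slope of y on x (equals Pearson r for one predictor).
        x = (x - x.mean()) / x.std()
        y = (y - y.mean()) / y.std()
        return float(x @ y) / len(x)

    # Bootstrap the path coefficient, as is customary in PLS-SEM.
    beta = path_coefficient(satisfaction, intention)
    boot = []
    for _ in range(2000):
        idx = rng.integers(0, n, size=n)
        boot.append(path_coefficient(satisfaction[idx], intention[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"beta = {beta:.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")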
Abstract:
Customer satisfaction and retention are key issues for organizations in today's competitive marketplace. As such, much research effort and money have been invested in developing accurate ways of assessing consumer satisfaction at both the macro (national) and micro (organizational) level, facilitating comparisons in performance both within and between industries. Since the introduction of the national customer satisfaction indices (CSI), partial least squares (PLS) has been used to estimate the CSI models in preference to structural equation models (SEM) because it does not rely on strict assumptions about the data. However, this choice was based upon some misconceptions about the use of SEMs and does not take into consideration more recent advances in SEM, including estimation methods that are robust to non-normality and missing data. In this paper, both SEM and PLS approaches were compared by evaluating perceptions of the Isle of Man Post Office's products and customer service using a CSI format. The new robust SEM procedures were found to be advantageous over PLS. Product quality was found to be the only driver of customer satisfaction, while image and satisfaction were the only predictors of loyalty, thus arguing for the specificity of postal services.
Abstract:
We present an experimental and numerical study of the influence that particle aspect ratio has on the mechanical and structural properties of granular packings. For grains with maximal symmetry (squares), the stress propagation in the packing localizes, forming chainlike forces analogous to the ones observed for spherical grains. This scenario can be understood in terms of stochastic models of aggregation and random multiplicative processes. As the grains elongate, the stress propagation is strongly affected: the interparticle normal force distribution tends toward a Gaussian, and, correspondingly, the force chains spread out, leading to a more uniform stress distribution reminiscent of the hydrostatic profiles known for standard liquids.
Abstract:
The electron hole transfer (HT) properties of DNA are substantially affected by thermal fluctuations of the π-stack structure. Depending on the mutual position of neighboring nucleobases, the electronic coupling V may change by several orders of magnitude. In the present paper, we report the results of systematic QM/molecular dynamics (MD) calculations of the electronic couplings and on-site energies for hole transfer. Based on 15 ns MD trajectories for several DNA oligomers, we calculate the average squared couplings ⟨V²⟩ and the energies of base-pair triplets XG+Y and XA+Y, where X, Y = G, A, T, and C. For each of the 32 systems, 15,000 conformations separated by 1 ps are considered. The three-state generalized Mulliken-Hush method is used to derive electronic couplings for HT between neighboring base pairs. The adiabatic energies and dipole moment matrix elements are computed within the INDO/S method. We compare the rms values of V with the couplings estimated for the idealized B-DNA structure and show that in several important cases the couplings calculated for the idealized B-DNA structure are considerably underestimated. The rms values for the intrastrand couplings G-G, A-A, G-A, and A-G are found to be similar, ∼0.07 eV, while the interstrand couplings are quite different. The energies of the hole states G+ and A+ in the stack depend on the nature of the neighboring pairs. The XG+Y triplets are more stable than XA+Y by 0.5 eV. Thermal fluctuations of the DNA structure facilitate the HT process from guanine to adenine. The tabulated couplings and on-site energies can be used as reference parameters in theoretical and computational studies of HT processes in DNA.
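For reference, the rms coupling quoted above is simply the square root of the thermally averaged squared coupling accumulated over the MD snapshots; in the abstract's notation,

    V_rms = \sqrt{\langle V^2 \rangle} = \left( \frac{1}{N} \sum_{i=1}^{N} V_i^2 \right)^{1/2},

with N = 15,000 conformations per system, so rare strongly coupled conformations can dominate the average even when the coupling of the idealized B-DNA geometry is small.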
Abstract:
Study carried out during a research stay at the School of Comparative American Studies of the University of Warwick, United Kingdom, between 2011 and 2012. This project first analyzes the popular mobilization of early liberalism and the formation of the first liberal political organizations, which arose out of the secret societies and spread through the main centres of liberal sociability: the patriotic societies. Second, through a study of the mobility of liberals between metropolitan Spain and the viceroyalty of New Spain, it shows how a new political model based on federalism took shape. The third aspect of the analysis is how the Catalan exiles in England received the support of the Foreign Bible Society, which had maintained contacts with the Spanish high clergy since the early 1820s. The final aspect of the research covers the study of urban space in relation to the political practices of citizens, based on an analysis of the creation and enlargement of the squares of the city of Barcelona during the first half of the nineteenth century.
Abstract:
We present building blocks for algorithms for the efficient reduction of square factors, i.e., direct repetitions in strings. The basic problem is this: given a string, compute all strings that can be obtained by reducing factors of the form zz to z. Two types of algorithms are treated: an offline algorithm is one that can compute a data structure on the given string in advance, before the actual search for squares begins; in contrast, an online algorithm receives all input only at the time when a request is made. For offline algorithms we treat the following problem: let u and w be two strings such that w is obtained from u by reducing a square factor zz to z. If we are further given the suffix table of u, how can we derive the suffix table of w without computing it from scratch? As the suffix table plays a key role in online algorithms for the detection of squares in a string, this derivation can make the iterated reduction of squares more efficient. On the other hand, we also show how a suffix array, used for the offline detection of squares, can be adapted to the new string resulting from the deletion of a square. Because the deletion is a very local change, this adaptation is more efficient than computing the new suffix array from scratch.
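The basic one-step operation described above is easy to state directly in code. The following naive O(n^3) Python scan enumerates every string obtainable by reducing a single square factor zz to z; the paper's actual contribution is making the iterated version of this efficient via suffix tables and suffix arrays, which this sketch does not attempt.

    def one_step_square_reductions(s: str) -> set[str]:
        # Collect all strings reachable by one reduction zz -> z.
        results = set()
        n = len(s)
        for start in range(n):
            for half in range(1, (n - start) // 2 + 1):
                if s[start:start + half] == s[start + half:start + 2 * half]:
                    # Delete one copy of the repeated factor z.
                    results.add(s[:start] + s[start + half:])
        return results

    print(one_step_square_reductions("aabab"))  # {'abab', 'aab'}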
Abstract:
Accumulation of fat in the liver increases the risk of developing fibrosis and cirrhosis and is associated with development of the metabolic syndrome. Here, to identify genes or gene pathways that may underlie the genetic susceptibility to fat accumulation in the liver, we studied A/J and C57Bl/6 mice, which are respectively resistant and sensitive to diet-induced hepatosteatosis and obesity. We performed comparative transcriptomic and lipidomic analysis of the livers of both strains of mice fed a high-fat diet for 2, 10, and 30 days. We found that resistance to steatosis in A/J mice was associated with the following: (i) a coordinated up-regulation of 10 genes controlling peroxisome biogenesis and β-oxidation; and (ii) an increased expression of the elongase Elovl5 and the desaturases Fads1 and Fads2. In agreement with these observations, peroxisomal β-oxidation was increased in livers of A/J mice, and lipidomic analysis showed increased concentrations of long-chain fatty acid-containing triglycerides, arachidonic acid-containing lysophosphatidylcholine, and 2-arachidonylglycerol, a cannabinoid receptor agonist. We found that the anti-inflammatory CB2 receptor was the main hepatic cannabinoid receptor and was highly expressed in Kupffer cells. We further found that A/J mice had a lower pro-inflammatory state, as determined by lower plasma levels of IL-1β and granulocyte-CSF and reduced hepatic expression of their mRNAs, which were found only in Kupffer cells. This suggests that increased 2-arachidonylglycerol production may limit Kupffer cell activity. Collectively, our data suggest that genetic variations in the expression of peroxisomal β-oxidation genes and of genes controlling the production of an anti-inflammatory lipid may underlie the differential susceptibility to diet-induced hepatic steatosis and the associated pro-inflammatory state.
Abstract:
For a wide range of environmental, hydrological, and engineering applications there is a fast-growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove to be inadequate in complex environments. For a typical crosshole georadar survey, the potential improvement in resolution when using waveform-based approaches instead of ray-based approaches is about one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While waveform tomographic imaging has become well established in exploration seismology over the past two decades, it is still comparatively underdeveloped in the georadar domain despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes to synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments. The general motivation of my thesis is therefore the evaluation of the robustness and limitations of waveform inversion algorithms for crosshole georadar data, in order to apply such schemes to a wide range of real-world problems.
One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which is unknown in reality. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem; accurate knowledge of the source wavelet is therefore critically important for the successful application of such schemes. Relatively small differences in the estimated source wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source wavelet estimation technique is simple yet effective, and is able to provide remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and the electrical conductivity, as well as significant ambient noise in the recorded data. Furthermore, our tests also indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when the wavelet estimation is directly incorporated into the inverse problem.
Another critical issue with crosshole georadar waveform inversion schemes that clearly needs to be investigated is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters. This is crucial because, in reality, these parameters are known to be frequency-dependent and complex, and recorded georadar data may therefore show significant dispersive behaviour. In particular, in the presence of water there is a wide body of evidence showing that the dielectric permittivity can be significantly frequency-dependent over the GPR frequency range, due to a variety of relaxation processes. The second part of my thesis is therefore dedicated to evaluating the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source wavelet estimation procedure, which is partially able to account for frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and is able to provide adequate tomographic reconstructions.
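As an illustration of the deconvolution idea (not the thesis's exact iterative scheme), a least-squares source-wavelet estimate can be obtained by stabilized spectral division of the observed traces by impulse responses simulated for the current model, stacked over all source-receiver pairs. The Python sketch below uses hypothetical array names and a simple water-level stabilization.

    import numpy as np

    def estimate_wavelet(observed, simulated, eps=1e-3):
        # observed, simulated: (n_traces, n_samples) time-domain arrays,
        # where 'simulated' holds impulse responses for the current model.
        D = np.fft.rfft(observed, axis=1)
        G = np.fft.rfft(simulated, axis=1)
        num = (np.conj(G) * D).sum(axis=0)      # stack over all traces
        den = (np.abs(G) ** 2).sum(axis=0)
        W = num / (den + eps * den.max())       # water-level stabilization
        return np.fft.irfft(W, n=observed.shape[1])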
Abstract:
Geoelectrical techniques are widely used to monitor groundwater processes, while surprisingly few studies have considered audio (AMT) and radio (RMT) magnetotellurics for such purposes. In this numerical investigation, we analyze to what extent inversion results based on AMT and RMT monitoring data can be improved by (1) time-lapse difference inversion; (2) incorporation of statistical information about the expected model update (i.e., the model regularization is based on a geostatistical model); (3) using alternative model norms to quantify temporal changes (i.e., approximations of l(1) and Cauchy norms using iteratively reweighted least squares); and (4) constraining model updates to predefined ranges (i.e., using Lagrange multipliers to allow only either increases or decreases of electrical resistivity with respect to background conditions). To do so, we consider a simple illustrative model and a more realistic test case related to seawater intrusion. The results are encouraging and show significant improvements when using time-lapse difference inversion with non-l(2) model norms. Artifacts that may arise when imposing compactness of regions with temporal changes can be suppressed through inequality constraints, yielding models without oscillations outside the true region of temporal changes. Based on these results, we recommend approximate l(1)-norm solutions, as they can resolve both sharp and smooth interfaces within the same model.
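A minimal sketch of the reweighting at the heart of such approximate l(1) solutions is given below; G, d, and the regularization operator L are generic placeholders, not the actual AMT/RMT operators or geostatistical constraints used in the study.

    import numpy as np

    def irls_l1(G, d, L, beta=1.0, n_iter=10, eps=1e-6):
        # l2 starting model, then iteratively reweighted least squares:
        # sum |Lm| is approximated by a quadratic form whose weights
        # 1/sqrt((Lm)^2 + eps^2) are refreshed from the previous iterate.
        m = np.linalg.lstsq(G, d, rcond=None)[0]
        for _ in range(n_iter):
            r = L @ m
            W = np.diag(1.0 / np.sqrt(r ** 2 + eps ** 2))
            m = np.linalg.solve(G.T @ G + beta * (L.T @ W @ L), G.T @ d)
        return m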
Abstract:
Structural equation models are widely used in economic, social, and behavioral studies to analyze linear interrelationships among variables, some of which may be unobservable or subject to measurement error. Alternative estimation methods that exploit different distributional assumptions are now available. The present paper deals with issues of asymptotic statistical inference, such as the evaluation of standard errors of estimates and chi-square goodness-of-fit statistics, in the general context of mean and covariance structures. The emphasis is on drawing correct statistical inferences regardless of the distribution of the data and the method of estimation employed. A (distribution-free) consistent estimate of $\Gamma$, the matrix of asymptotic variances of the vector of sample second-order moments, will be used to compute robust standard errors and a robust chi-square goodness-of-fit statistic. Simple modifications of the usual estimate of $\Gamma$ will also permit correct inferences in the case of multi-stage complex samples. We will also discuss the conditions under which, regardless of the distribution of the data, one can rely on the usual (non-robust) inferential statistics. Finally, a multivariate regression model with errors-in-variables will be used to illustrate, by means of simulated data, various theoretical aspects of the paper.
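For a minimum-discrepancy estimator with weight matrix $V$ and Jacobian $\Delta = \partial\sigma(\theta)/\partial\theta$ of the model moments, robust standard errors of the kind referred to above typically come from a sandwich-type expression; the textbook form (not necessarily the paper's exact notation) is

    \mathrm{avar}(\hat\theta) = n^{-1} (\Delta' V \Delta)^{-1} \Delta' V \Gamma V \Delta \, (\Delta' V \Delta)^{-1},

with $\Gamma$ replaced by its (distribution-free) consistent estimate.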
Abstract:
The Maximum Capture problem (MAXCAP) is a decision model that addresses the issue of location in a competitive environment. This paper presents a new approach to determining which store attributes (other than distance) should be included in the new Market Capture Models and how they ought to be reflected using the Multiplicative Competitive Interaction model. The methodology involves the design and development of a survey and the application of factor analysis and ordinary least squares. The methodology has been applied to the supermarket sector in two different scenarios: Milton Keynes (Great Britain) and Barcelona (Spain).
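For reference, the Multiplicative Competitive Interaction model mentioned above is usually written in the Nakanishi-Cooper form (generic symbols, not necessarily the paper's notation):

    P_{ij} = \frac{\prod_k A_{kj}^{\beta_k} \, d_{ij}^{-\lambda}}{\sum_{j'} \prod_k A_{kj'}^{\beta_k} \, d_{ij'}^{-\lambda}},

where P_{ij} is the probability that consumers at demand point i patronize store j, A_{kj} is the k-th attribute of store j, d_{ij} is the distance, and \beta_k and \lambda are parameters; after a log-centering transformation the model becomes linear, which is what makes ordinary least squares applicable.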
Abstract:
We lay out a model of wage bargaining with two leading features: bargaining is ex post to relevant investments, and there is individual bargaining in firms without a union. We compare individual ex post bargaining to coordinated ex post bargaining and analyze the effects on wage formation. As opposed to ex ante bargaining models, the costs of destroying the employment relationship play a crucial role in determining wages. High firing costs in particular yield a rent for employees. Our theory points to an employer size-wage effect that is independent of the production function and market power. We derive a simple least squares specification from the theoretical model that allows us to estimate components of the wage premium from coordination. We reject the hypothesis that labor coordination does not alter the extensive form of the bargaining game. Labor coordination substantially increases bargaining power but decreases labor's ability to pose costly threats to the firm.
Abstract:
Anomis impasta (Guenée) is a species that shows remarkable morphological and behavioral similarities with the cotton leafworm Alabama argillacea (Hübner). During two cotton growing seasons, A. impasta was observed and monitored feeding on leaves and flower bracts of cotton. Furthermore, a study was conducted under laboratory conditions to generate biological information about this species, with larvae fed cotton squares and leaves. Larvae fed on cotton squares exhibited delayed development (18.5 ± 0.18 days) and lower pupal weight (140.8 ± 2.26 mg) compared to larvae fed on cotton leaves (14.0 ± 0.07 days and 169.3 ± 2.06 mg). Thus, one generation cycle of A. impasta was obtained by feeding the larvae with cotton leaves. The mean (minimum-maximum) durations of the egg, larval, and pupal stages were 3.0 (3-4), 14.8 (14-18), and 9.7 (7-14) days, respectively. The viability of the eggs, larvae, and pupae was 43.7, 98.3, and 94.7%, respectively. Females lived on average 25.2 days (ranging from 15 to 37 days) and produced 869 eggs (from 4 to 1,866 eggs). The successful development and reproduction of A. impasta on cotton, and especially on cotton leaves, suggest the potential of this species to reach pest status in cotton. The similarities with A. argillacea, as discussed in this study, may be one of the reasons why A. impasta is seldom reported in the field. The information provided here will therefore allow researchers and growers to distinguish these two cotton defoliators.
Abstract:
We construct a weighted Euclidean distance that approximates any distance or dissimilarity measure between individuals that is based on a rectangular cases-by-variables data matrix. In contrast to regular multidimensional scaling methods for dissimilarity data, the method leads to biplots of individuals and variables while preserving all the good properties of dimension-reduction methods that are based on the singular value decomposition. The main benefits are the decomposition of variance into components along principal axes, which provides the numerical diagnostics known as contributions, and the estimation of nonnegative weights for each variable. The idea is inspired by the distance functions used in correspondence analysis and in principal component analysis of standardized data, where the normalizations inherent in the distances can be considered as differential weighting of the variables. In weighted Euclidean biplots we allow these weights to be unknown parameters, which are estimated from the data by a majorization algorithm so as to maximize the fit to the chosen distances or dissimilarities. Once this extra weight-estimation step is accomplished, the procedure follows the classical path of decomposing the matrix and displaying its rows and columns in biplots.
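The weight-estimation step described above can be prototyped with a generic bounded optimizer standing in for the authors' majorization algorithm: find nonnegative variable weights w so that the weighted Euclidean distances between the rows of a data matrix X best match the target dissimilarities.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial.distance import pdist, squareform

    def fit_variable_weights(X, delta):
        # X: (n, p) cases-by-variables matrix; delta: (n, n) dissimilarities.
        target = squareform(delta, checks=False)   # condensed form

        def stress(w):
            d = pdist(X * np.sqrt(w))              # weighted Euclidean distance
            return np.sum((d - target) ** 2)

        res = minimize(stress, x0=np.ones(X.shape[1]),
                       bounds=[(0, None)] * X.shape[1], method="L-BFGS-B")
        return res.x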