989 results for Squares


Relevance:

10.00%

Publisher:

Abstract:

Feasible tomography schemes for large particle numbers must possess, besides an appropriate data acquisition protocol, an efficient way to reconstruct the density operator from the observed finite data set. Since state reconstruction typically requires the solution of a nonlinear large-scale optimization problem, this is a major challenge in the design of scalable tomography schemes. Here we present an efficient state reconstruction scheme for permutationally invariant quantum state tomography. It works for all common state-of-the-art reconstruction principles, including, in particular, maximum likelihood and least squares methods, which are the preferred choices in today's experiments. This high efficiency is achieved by greatly reducing the dimensionality of the problem: a particular representation of permutationally invariant states known from spin coupling is combined with convex optimization, which has clear advantages in speed, control, and accuracy over commonly employed numerical routines. First prototype implementations easily allow reconstruction of a state of 20 qubits in a few minutes on a standard computer.
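The convex core of such a reconstruction can be sketched in a few lines. Below is a minimal, illustrative least-squares state estimate for a single qubit using cvxpy; the observables and measured values are placeholders, and a scalable permutationally invariant implementation would work in the block-diagonal spin-coupling representation rather than the full density matrix.

```python
import numpy as np
import cvxpy as cp

# Single-qubit Pauli observables and hypothetical measured expectation
# values -- placeholders standing in for real tomography data.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
observables = [X, Y, Z]
measured = [0.30, -0.10, 0.85]

# The density matrix is a Hermitian variable constrained to be a
# physical state: positive semidefinite with unit trace.
rho = cp.Variable((2, 2), hermitian=True)
constraints = [rho >> 0, cp.trace(rho) == 1]

# Least-squares reconstruction: match predicted expectations to the data.
residuals = [cp.real(cp.trace(rho @ O)) - m for O, m in zip(observables, measured)]
problem = cp.Problem(cp.Minimize(cp.sum_squares(cp.hstack(residuals))), constraints)
problem.solve()
print(np.round(rho.value, 3))
```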

Relevance:

10.00%

Publisher:

Abstract:

Catches of skipjack tuna supporting major fisheries in parts of the western, central and eastern Pacific Ocean have increased in recent years; thus, it is important to examine the dynamics of the fishery to determine man's effect on the abundance of the stocks. A general linear hypothesis model was developed to standardize fishing effort to a single vessel size and gear type. Standardized effort was then used to compute an index of abundance which accounts for seasonal variability in the fishing area. The indices of abundance were highly variable from year to year in both the northern and southern areas of the fishery but indicated a generally higher abundance in the south. Data from 438 fish tagged and recovered in the eastern Pacific Ocean were used to compute growth curves. A least-squares technique was used to estimate the parameters of the von Bertalanffy growth function. Two estimates of the parameters were made by analyzing the same data in different ways. For the first set of estimates, K = 0.819 on an annual instantaneous basis and L∞ = 729 mm; for the second, K = 0.431 and L∞ = 881 mm. These compared well with estimates derived using the Chapman-Richards growth function, which includes the von Bertalanffy function as a special case. It was concluded that the latter function provided an adequate empirical fit to the skipjack data, since the more complicated function did not significantly improve the fit. Tagging data from three cruises involving 8852 releases and 1777 returns were used to compute mortality rates during the time the fish were in the fishery. Two models were used in the analyses. The best estimates of the catchability coefficient (q) in the north and south were 8.4 × 10^-4 and 5.0 × 10^-5, respectively. The other loss rate (X), which included losses due to emigration, natural mortality and mortality due to carrying a tag, was 0.14 on an annual instantaneous basis for both areas. To detect the possible effect of fishing on abundance and total yield, the relations between abundance and effort and between total catch and effort were examined. It was found that at the levels of intensity observed in the fishery, fishing does not appear to have had any measurable effect on the stocks. It was concluded therefore that the total catch could probably be increased by substantially increasing total effort beyond the present level, and that the fluctuations in abundance are fishery-independent. The estimates of growth, mortality and fishing effort were used to compute yield-per-recruitment isopleths for skipjack in both the northern and southern areas. For a size at first entry of about 425 mm, the yield per recruitment was calculated at 3 pounds in the north and 1.5 pounds in the south. In both areas it would be possible to increase the yield per recruitment by increasing fishing effort. It was not possible to assess the potential production of the skipjack stocks fished in the eastern Pacific, except to note that the fishery had not affected their abundance and that they were certainly under-exploited. It was concluded that the northern and southern stocks could support increased harvests, especially the latter. (PDF contains 274 pages.)
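For context, the least-squares von Bertalanffy fit described above can be reproduced in outline with a standard nonlinear least-squares routine. The sketch below fits L(t) = L∞(1 − e^(−K(t − t₀))) to hypothetical length-at-age data; the numbers are placeholders, not the report's tagging data.

```python
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(t, L_inf, K, t0):
    """Von Bertalanffy growth function: length at age t."""
    return L_inf * (1.0 - np.exp(-K * (t - t0)))

# Hypothetical age (years) and length (mm) observations -- placeholders.
age = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0])
length = np.array([280.0, 420.0, 520.0, 590.0, 640.0, 680.0, 730.0])

# Least-squares estimates of L_inf, K, and t0 (initial guesses required).
(L_inf, K, t0), _ = curve_fit(von_bertalanffy, age, length, p0=[800.0, 0.5, 0.0])
print(f"L_inf = {L_inf:.0f} mm, K = {K:.3f} per year, t0 = {t0:.2f} yr")
```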

Relevance:

10.00%

Publisher:

Abstract:

The tension and compression of single-crystalline silicon nanowires (SiNWs) with different cross-sectional shapes are studied systematically using molecular dynamics simulation. The shape effects on the yield stresses are characterized. For the same surface-to-volume ratio, the circular cross-sectional SiNWs are stronger than the square cross-sectional ones under tensile loading, but the reverse holds under compressive loading. With the atoms colored by the least-squares atomic local shear strain, the deformation processes reveal that the failure modes at incipient yielding depend on the loading direction: the SiNWs under tensile loading slip on {111} planes, while compressive loading leads the SiNWs to slip on {110} planes. The present results are expected to contribute to the design of silicon devices in nanosystems.
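The least-squares atomic local shear strain used for the coloring assigns each atom the deformation gradient that best maps its reference-state neighbor vectors onto the deformed ones, then takes a von Mises-type shear invariant of the resulting local strain (in the spirit of the Shimizu-Ogata-Li measure). A brute-force numpy sketch, with the cutoff and configurations as placeholders and periodic boundaries omitted:

```python
import numpy as np

def local_shear_strain(ref, cur, cutoff=3.0):
    """Per-atom least-squares local shear strain (von Mises invariant).

    ref, cur: (N, 3) positions in the reference and deformed states.
    Brute-force neighbor search; production analyses use cell lists
    and periodic boundary conditions.
    """
    n = len(ref)
    eta_vm = np.zeros(n)
    for i in range(n):
        d0 = ref - ref[i]
        mask = np.linalg.norm(d0, axis=1) < cutoff
        mask[i] = False
        D0, D = d0[mask], cur[mask] - cur[i]
        # J minimizes sum_m |D_m - D0_m J|^2 over the neighbor vectors.
        J, *_ = np.linalg.lstsq(D0, D, rcond=None)
        eta = 0.5 * (J @ J.T - np.eye(3))             # local Lagrangian strain
        dev = eta - np.trace(eta) / 3.0 * np.eye(3)   # deviatoric part
        eta_vm[i] = np.sqrt(0.5 * np.sum(dev * dev))  # von Mises shear strain
    return eta_vm
```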

Relevance:

10.00%

Publisher:

Abstract:

The average linear growth rate of skipjack in the eastern Pacific is less than 1 mm per day except for fish 375 to 424 mm in length at release. The growth rate decreases with increasing length and increasing time at liberty. The growth rate of fish in the length range of about 43 to 57 cm is apparently more rapid in the eastern Pacific than in the western Pacific. Using data for the northeastern and southeastern Pacific combined, K and L∞ were estimated to be 0.658 (on an annual basis) and 885 mm, respectively, by the ungrouped method, and 0.829 and 846 mm, respectively, by the grouped method. Sensitivity analyses have shown, however, that the estimates of these parameters are poorly determined by the sum-of-squares method used to derive them. Estimates of K and L∞ for the eastern Pacific tend to be lower and higher, respectively, than those for the western Pacific. The average linear growth rate of yellowfin in the eastern Pacific is a little less than 1 mm per day for fish between about 25 and 100 cm in length at release. Growth appears to be most rapid in Area 2 (Revillagigedo Islands) and slowest in Areas 1 (Baja California), 5 (Central America-Colombia), and 6 (Ecuador-Peru). There is considerable variation in the growth rates of individual fish. Growth does not decrease with increasing length or increasing time at liberty, so realistic estimates of the parameters of the von Bertalanffy or other similar equations cannot be calculated from these data. If realistic estimates of these parameters are to be secured, larger fish must be tagged and released, or many more long-term returns from fish of about 100 cm in length at release must be obtained. The growth patterns for the eastern Pacific, central Pacific and eastern Atlantic found by most other investigators differ from one another and from those found in the present study. Some of these differences may be real and others may be due to deficiencies in the data or the methods of analysis. Estimates obtained from tagging data are believed to be realistic provided the tags do not inhibit the growth of the fish. It appears that the growth rates of single- and double-tagged fish are the same; this indicates, though not unequivocally, that the tags do not inhibit the growth. (PDF contains 76 pages.)
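The poor determination of K and L∞ by the sum-of-squares criterion can be seen by profiling the residual sum of squares over a grid of (K, L∞) pairs: a long, shallow valley along a K-L∞ trade-off means many parameter combinations fit nearly equally well. A sketch with placeholder data:

```python
import numpy as np

def vb(t, L_inf, K, t0=0.0):
    """Von Bertalanffy length at age t."""
    return L_inf * (1.0 - np.exp(-K * (t - t0)))

# Placeholder length-at-age data, not the report's tagging data.
age = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
length = np.array([300.0, 450.0, 550.0, 620.0, 670.0, 700.0])

# Residual sum of squares over a (K, L_inf) grid.
Ks = np.linspace(0.3, 1.2, 91)
Ls = np.linspace(700.0, 1000.0, 61)
ssq = np.array([[np.sum((length - vb(age, L, K)) ** 2) for K in Ks] for L in Ls])

i, j = np.unravel_index(np.argmin(ssq), ssq.shape)
print(f"best grid fit: L_inf = {Ls[i]:.0f} mm, K = {Ks[j]:.2f}")
# Many grid cells near the minimum signal a flat, poorly determined optimum.
print(f"cells within 5% of the minimum SSQ: {(ssq < 1.05 * ssq.min()).sum()}")
```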

Relevance:

10.00%

Publisher:

Abstract:

Monthly estimates of the abundance of yellowfin tuna by age groups and regions within the eastern Pacific Ocean during 1970-1988 are made, using purse-seine catch rates, length-frequency samples, and results from cohort analysis. The number of individuals caught of each age group in each logged purse-seine set is estimated, using the tonnage from that set and the length-frequency distribution from the "nearest" length-frequency sample(s). Nearest refers to the closest length-frequency sample(s) to the purse-seine set in time, distance, and set type (dolphin associated, floating-object associated, skipjack associated, none of these, and some combinations). Catch rates are initially calculated as the estimated number of individuals of the age group caught per hour of searching. Then, to remove the effects of set type and vessel speed, they are standardized, using separate weighted generalized linear models for each age group. The standardized catch rates at the center of each 2.5° quadrangle-month are estimated, using locally weighted least-squares regressions on latitude, longitude and date, and then combined into larger regions. Catch rates within these regions are converted to numbers of yellowfin, using the mean age composition from cohort analysis. The variances of the abundance estimates within regions are large for 0-, 1-, and 5-year-olds, but small for 1.5- to 4-year-olds, except during periods of low fishing activity. Mean annual catch-rate estimates for the entire eastern Pacific Ocean are significantly positively correlated with mean abundance estimates from cohort analysis for age groups ranging from 1.5 to 4 years old. Catch-rate indices of abundance by age are expected to be useful in conjunction with data on reproductive biology to estimate total egg production within regions. The estimates may also be useful in understanding geographic and temporal variations in age-specific availability to purse seiners, as well as age-specific movements. (PDF contains 35 pages.)
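Locally weighted least-squares regression of the kind used for the 2.5° quadrangle-month surfaces fits, at each prediction point, a linear model in which nearby observations receive more weight. A minimal one-predictor sketch with tricube weights (a common choice; the report's exact weighting scheme and its three-predictor form are not reproduced here):

```python
import numpy as np

def loess_point(x0, x, y, span=0.5):
    """Locally weighted linear least squares evaluated at x0."""
    n = len(x)
    k = max(3, int(span * n))              # size of the local neighborhood
    dist = np.abs(x - x0)
    idx = np.argsort(dist)[:k]
    d = dist[idx] / dist[idx].max()
    w = (1.0 - d ** 3) ** 3                # tricube weights
    # Weighted least squares for a local intercept and slope.
    A = np.column_stack([np.ones(k), x[idx] - x0])
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y[idx])
    return beta[0]                         # local fit evaluated at x0

# Placeholder data: a noisy seasonal signal over twelve "months".
rng = np.random.default_rng(0)
x = np.linspace(0.0, 12.0, 120)
y = np.sin(x) + 0.3 * rng.standard_normal(len(x))
smoothed = np.array([loess_point(xi, x, y) for xi in x])
```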

Relevance:

10.00%

Publisher:

Abstract:

Monthly fish surveys were made from 1997 to 1999 in the Kenyan waters of Lake Victoria in order to estimate the magnitude of the fisheries resources. Sample sites were defined using GPS, and thirty-minute hauls were made in alternate grid squares. Demersal fish biomass was estimated using the swept-area method, and two different trawl nets were used for the trawling. The collected fish were sorted into species, measured (TL), and weighed. Smaller fish were mixed on deck and sub-samples taken. Sexual maturity stages of the fish were also recorded. Areas with consistently high catches were located outside major urban and riverine influence, where most artisanal fishermen were concentrated. Very low catches were obtained from areas that had recently been covered by the water hyacinth Eichhornia crassipes.
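As a worked illustration of the swept-area method (in the standard FAO formulation; every number below is hypothetical, not a survey value), biomass is estimated by raising catch per unit of area swept to the whole water body:

```python
def swept_area_biomass(catch_kg, tow_distance_nm, headrope_m, area_nm2,
                       wing_fraction=0.5, catchability=1.0):
    """Swept-area biomass estimate.

    a = D * h * X2: area swept = tow distance x headrope length x the
    fraction of the headrope over which fish are herded (X2).
    B = (C / a) * A / X1: catch per unit area, raised to the total
    area A and divided by the catchability coefficient (X1).
    """
    headrope_nm = headrope_m / 1852.0          # metres to nautical miles
    swept = tow_distance_nm * headrope_nm * wing_fraction
    density = catch_kg / swept                 # kg per square nautical mile
    return density * area_nm2 / catchability

# Hypothetical 30-minute haul at 3 knots (1.5 nm) with a 20-m headrope.
print(f"{swept_area_biomass(120.0, 1.5, 20.0, 1000.0):,.0f} kg")
```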

Relevance:

10.00%

Publisher:

Abstract:

In this thesis, a method to retrieve the source finiteness, depth of faulting, and the mechanisms of large earthquakes from long-period surface waves is developed and applied to several recent large events.

In Chapter 1, source finiteness parameters of eleven large earthquakes were determined from long-period Rayleigh waves recorded at IDA and GDSN stations. The basic data set consists of seismic spectra at periods from 150 to 300 sec. Two simple models of source finiteness are studied. The first model is a point source with finite duration. In the determination of the duration or source-process times, we used Furumoto's phase method and a linear inversion method, in which we simultaneously inverted the spectra and determined the source-process time that minimizes the error in the inversion. These two methods yielded consistent results. The second model is the finite-fault model. Source finiteness of large shallow earthquakes with rupture on a fault plane with a large aspect ratio was modeled with the source-finiteness function introduced by Ben-Menahem. The spectra were inverted to find the extent and direction of the rupture of the earthquake that minimize the error in the inversion. This method is applied to the 1977 Sumbawa, Indonesia, 1979 Colombia-Ecuador, 1983 Akita-Oki, Japan, 1985 Valparaiso, Chile, and 1985 Michoacan, Mexico earthquakes. The method yielded results consistent with the rupture extent inferred from the aftershock areas of these earthquakes.

In Chapter 2, the depths and source mechanisms of nine large shallow earthquakes were determined. We inverted the data set of complex source spectra for a moment tensor (linear) or a double couple (nonlinear). By solving a least-squares problem, we obtained the centroid depth or the extent of the distributed source for each earthquake. The depths and source mechanisms of large shallow earthquakes determined from long-period Rayleigh waves depend on the models of source finiteness, wave propagation, and the excitation. We tested various models of the source finiteness, Q, the group velocity, and the excitation in the determination of earthquake depths.
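The linear (moment tensor) branch of this inversion has the generic least-squares form d = Gm, with d the stacked complex source spectra, G the excitation kernels, and m the moment-tensor elements. A schematic numpy sketch in which random placeholders stand in for the actual excitation functions and observed spectra:

```python
import numpy as np

rng = np.random.default_rng(1)

# d = G m: placeholder complex "spectra" and "kernels", not real data.
n_obs, n_mt = 40, 6                       # spectral samples, MT components
G = rng.standard_normal((n_obs, n_mt)) + 1j * rng.standard_normal((n_obs, n_mt))
m_true = np.array([1.0, -0.5, -0.5, 0.2, 0.0, 0.3])
d = G @ m_true + 0.05 * (rng.standard_normal(n_obs)
                         + 1j * rng.standard_normal(n_obs))

# Complex least squares with a real-valued moment tensor: stack the
# real and imaginary parts and solve an ordinary least-squares problem.
A = np.vstack([G.real, G.imag])
b = np.concatenate([d.real, d.imag])
m_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(m_est, 3))
```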

The depth estimates obtained using the Q model of Dziewonski and Steim (1982) and the excitation functions computed for the average ocean model of Regan and Anderson (1984) are considered most reasonable. Dziewonski and Steim's Q model represents a good global average of Q determined over the period range of the Rayleigh waves used in this study. Since most of the earthquakes studied here occurred in subduction zones, Regan and Anderson's average ocean model is considered most appropriate.

Our depth estimates are in general consistent with the Harvard CMT solutions. The centroid depths and their 90% confidence intervals (numbers in parentheses) determined by the Student's t test are: Colombia-Ecuador earthquake (12 December 1979), d = 11 km, (9, 24) km; Santa Cruz Is. earthquake (17 July 1980), d = 36 km, (18, 46) km; Samoa earthquake (1 September 1981), d = 15 km, (9, 26) km; Playa Azul, Mexico earthquake (25 October 1981), d = 41 km, (28, 49) km; El Salvador earthquake (19 June 1982), d = 49 km, (41, 55) km; New Ireland earthquake (18 March 1983), d = 75 km, (72, 79) km; Chagos Bank earthquake (30 November 1983), d = 31 km, (16, 41) km; Valparaiso, Chile earthquake (3 March 1985), d = 44 km, (15, 54) km; Michoacan, Mexico earthquake (19 September 1985), d = 24 km, (12, 34) km.

In Chapter 3, the vertical extent of faulting of the 1983 Akita-Oki and 1977 Sumbawa, Indonesia earthquakes is determined from fundamental and overtone Rayleigh waves. Using fundamental Rayleigh waves, the depths are determined from the moment tensor inversion and the fault inversion. The observed overtone Rayleigh waves are compared to synthetic overtone seismograms to estimate the depth of faulting of these earthquakes. For both earthquakes, the depths obtained from overtone Rayleigh waves are consistent with those determined from fundamental Rayleigh waves. Appendix B gives the observed seismograms of fundamental and overtone Rayleigh waves for eleven large earthquakes.

Relevance:

10.00%

Publisher:

Abstract:

Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.

This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems. Conditions on the objective function and information constraints under which a convex formulation exists are presented. Since the size of the optimization problem can become quite large, solutions for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.

When an equivalent convex formulation is unavailable, it is possible to find a convex problem that provides a meaningful bound for the original problem, also known as a convex relaxation. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structures such as symmetry are exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums-of-squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.
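The sums-of-squares relaxation at the end of this pipeline has a compact canonical form: to lower-bound a polynomial p, maximize γ such that p(x) − γ = z(x)ᵀQz(x) for a monomial vector z(x) and a positive semidefinite Gram matrix Q, which is a semidefinite program. A minimal univariate sketch with cvxpy, on an illustrative polynomial rather than one of the thesis problems:

```python
import cvxpy as cp

# Lower-bound p(x) = x^4 - 3x^2 + 2: maximize gamma subject to
# p(x) - gamma = z(x)^T Q z(x) with z(x) = [1, x, x^2] and Q PSD.
c = [2.0, 0.0, -3.0, 0.0, 1.0]       # coefficients of 1, x, x^2, x^3, x^4
Q = cp.Variable((3, 3), symmetric=True)
gamma = cp.Variable()

constraints = [
    Q >> 0,                          # Gram matrix is positive semidefinite
    Q[0, 0] == c[0] - gamma,         # constant term
    2 * Q[0, 1] == c[1],             # x term
    2 * Q[0, 2] + Q[1, 1] == c[2],   # x^2 term
    2 * Q[1, 2] == c[3],             # x^3 term
    Q[2, 2] == c[4],                 # x^4 term
]
cp.Problem(cp.Maximize(gamma), constraints).solve()
print(f"SOS lower bound: {gamma.value:.4f}")
```

For univariate polynomials the SOS bound is tight, so γ recovers the true minimum of −0.25 here; in the multivariate, constrained problems above, the same construction yields a hierarchy of semidefinite relaxations.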

Relevance:

10.00%

Publisher:

Abstract:

A series of eight related analogs of distamycin A has been synthesized. Footprinting and affinity cleaving reveal that only two of the analogs, pyridine-2-carboxamide-netropsin (2-PyN) and 1-methylimidazole-2-carboxamide-netropsin (2-ImN), bind to DNA with a specificity different from that of the parent compound. A new class of sites, represented by a TGACT sequence, is a strong site for 2-PyN binding, and the major recognition site for 2-ImN on DNA. Both compounds recognize the G•C bp specifically, although A's and T's in the site may be interchanged without penalty. Additional A•T bp outside the binding site increase the binding affinity. The compounds bind in the minor groove of the DNA sequence, but protect both grooves from dimethylsulfate. The binding evidence suggests that 2-PyN or 2-ImN binding induces a DNA conformational change.

In order to understand this sequence-specific complexation better, the Ackers quantitative footprinting method for measuring individual-site affinity constants has been extended to small molecules. MPE•Fe(II) cleavage reactions over a 10^5 range of free-ligand concentrations are analyzed by gel electrophoresis. The decrease in cleavage is calculated by densitometry of a gel autoradiogram. The apparent fraction of DNA bound is then calculated from the amount of cleavage protection. The data are fitted to a theoretical curve using non-linear least squares techniques. Affinity constants at four individual sites are determined simultaneously. The distamycin A analog binds solely at A•T-rich sites. Affinities range from 10^(6) to 10^(7) M^(-1). The data for the parent compound D fit closely to a monomeric binding curve. 2-PyN binds both A•T sites and the TGTCA site with an apparent affinity constant of 10^(5) M^(-1). 2-ImN binds A•T sites with affinities less than 5 x 10^(4) M^(-1). The affinity of 2-ImN for the TGTCA site does not change significantly from the 2-PyN value. At the TGTCA site, the experimental data fit a dimeric binding curve better than a monomeric curve. Both 2-PyN and 2-ImN have substantially lower DNA affinities than closely related compounds.
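The monomeric and dimeric binding curves referred to above differ in their dependence on free-ligand concentration, so fitting both to a footprinting titration can distinguish 1:1 from 2:1 complexation. A schematic nonlinear least-squares sketch with hypothetical titration data (the actual analysis fits four sites simultaneously):

```python
import numpy as np
from scipy.optimize import curve_fit

def theta_monomer(L, K):
    """Fractional saturation for 1:1 (monomeric) binding."""
    return K * L / (1.0 + K * L)

def theta_dimer(L, K):
    """Fractional saturation for cooperative 2:1 (dimeric) binding."""
    return (K * L) ** 2 / (1.0 + (K * L) ** 2)

# Hypothetical titration: free-ligand concentration (M) vs. fraction bound.
L = np.logspace(-7, -3, 9)
theta_obs = np.array([0.00, 0.01, 0.06, 0.20, 0.50, 0.80, 0.94, 0.99, 1.00])

for model, name in [(theta_monomer, "monomeric"), (theta_dimer, "dimeric")]:
    (K,), _ = curve_fit(model, L, theta_obs, p0=[1e5])
    rss = np.sum((theta_obs - model(L, K)) ** 2)
    print(f"{name}: K = {K:.2e} / M, residual sum of squares = {rss:.4f}")
```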

In order to probe the requirements of this new binding site, fourteen other derivatives have been synthesized and tested. All compounds that recognize the TGTCA site have a heterocyclic aromatic nitrogen ortho to the N or C-terminal amide of the netropsin subunit. Specificity is strongly affected by the overall length of the small molecule. Only compounds that consist of at least three aromatic rings linked by amides exhibit TGTCA site binding. Specificity is only weakly altered by substitution on the pyridine ring, which correlates best with steric factors. A model is proposed for TGTCA site binding that has as its key feature hydrogen bonding to both G's by the small molecule. The specificity is determined by the sequence dependence of the distance between G's.

One derivative of 2-PyN exhibits pH-dependent sequence specificity. At low pH, 4-dimethylaminopyridine-2-carboxamide-netropsin binds tightly to A•T sites. At high pH, 4-Me_(2)NPyN binds most tightly to the TGTCA site. In aqueous solution, this compound protonates at the pyridine nitrogen at pH 6. Thus the presence of the protonated form correlates with A•T specificity.

The binding site of a class of eukaryotic transcriptional activators typified by the yeast protein GCN4 and the mammalian oncogene Jun contains a strong 2-ImN binding site. The specificity requirements for the protein and the small molecule are similar. GCN4 and 2-ImN bind simultaneously to the same binding site. GCN4 alters the cleavage pattern of a 2-ImN-EDTA derivative at only one of its binding sites. The details of the interaction suggest that GCN4 alters the conformation of an AAAAAAA sequence adjacent to its binding site. The presence of a yeast counterpart to Jun partially blocks 2-ImN binding. The differences do not appear to be caused by direct interactions between 2-ImN and the proteins, but by induced conformational changes in the DNA-protein complex. It is likely that the observed differences in complexation are involved in the varying sequence specificity of these proteins.

Relevance:

10.00%

Publisher:

Abstract:

A uniform submicron periodic square structure was fabricated on the surface of ZnO by a technique in which two linearly polarized femtosecond laser beams with orthogonal polarizations ablate the material alternately. The resulting ordered two-dimensional submicron structure consists of close-packed submicron squares with a spatial periodicity of 290 nm, arising from the intercrossing of the two orthogonal submicron ripple structures induced by the two beams respectively. The result demonstrates a noninterference effect of two-beam ablation based on the alternate technique, which likely arises from the polarization-dependent enhancement of the subwavelength ripple structure and the large interval between the two alternate pulses. This two-beam alternate ablation technique is expected to open up prospects for the submicron fabrication of wide-bandgap materials.

Relevance:

10.00%

Publisher:

Abstract:

The Hamilton Jacobi Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems specified by known dynamics and a specified cost functional. Given the assumption of quadratic cost on the control input, it is well known that the HJB reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used, as the PDE is of second order, is nonlinear, and examples exist where the problem may not have a solution in a classical sense. Furthermore, each state of the system appears as another dimension of the PDE, giving rise to the curse of dimensionality. Since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for systems of all but modest dimension.

In the last decade researchers have found that under certain, fairly non-restrictive structural assumptions, the HJB may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov Decision Processes (MDP). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
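For reference, a standard statement of this linearization (in the linearly solvable stochastic optimal control literature; the exact assumptions and sign conventions vary by author) takes dynamics dx = (f(x) + Bu)dt + Bdω and cost rate q(x) + ½uᵀRu, and assumes the noise covariance and control penalty are related by Σ = λBR⁻¹Bᵀ. The exponential (desirability) transform then cancels the quadratic term in the HJB:

```latex
% Desirability transform: V(x) = -\lambda \log \Psi(x).
% Under \Sigma = \lambda B R^{-1} B^\top, the nonlinear HJB
%   0 = q + f^\top \nabla V
%         - \tfrac12 \nabla V^\top B R^{-1} B^\top \nabla V
%         + \tfrac12 \operatorname{tr}(\Sigma \nabla^2 V)
% becomes linear in \Psi:
\[
0 \;=\; -\frac{q(x)}{\lambda}\,\Psi
        \;+\; f(x)^\top \nabla \Psi
        \;+\; \tfrac12 \operatorname{tr}\!\bigl(\Sigma\,\nabla^{2}\Psi\bigr).
\]
```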

This is done by crafting together previously disjoint lines of research in computation. The first of these is the use of Sum of Squares (SOS) techniques for synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem. An SOS relaxation is then taken to the partial differential constraints, leading to a hierarchy of semidefinite relaxations with improving sub-optimality gap. The resulting approximate solutions are shown to be guaranteed over- and under-approximations for the optimal value function. It is shown that these results extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for Uncertainty Quantification (UQ) of systems governed by partial differential constraints. Domain decomposition techniques are also made available, allowing for such problems to be solved via parallelization and low-order polynomials.

The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied mathematics community. The technique allows for systems of equations to be solved through a low-rank decomposition that results in algorithms that scale linearly with dimensionality. Its application in stochastic optimal control allows for previously uncomputable problems to be solved quickly, scaling to such complex systems as the Quadcopter and VTOL aircraft. This technique may be combined with the SOS approach, yielding not only a numerical technique, but also an analytical one that allows for entirely new classes of systems to be studied and for stability properties to be guaranteed.

The analysis of the linear HJB is completed by the study of its implications in application. It is shown that the HJB and a popular technique in robotics, the use of navigation functions, sit on opposite ends of a spectrum of optimization problems, upon which tradeoffs may be made in problem complexity. Analytical solutions to the HJB in these settings are available in simplified domains, yielding guidance towards optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. It is demonstrated that such problems are reducible to a sequence of SOC problems linked via boundary conditions. The linearity of the PDE allows us to pre-compute control policy primitives and then compose them, at essentially zero cost, to satisfy a complex temporal logic specification.

Relevance:

10.00%

Publisher:

Abstract:

There is a growing interest in taking advantage of possible patterns and structures in data so as to extract the desired information and overcome the curse of dimensionality. In a wide range of applications, including computer vision, machine learning, medical imaging, and social networks, the signal that gives rise to the observations can be modeled to be approximately sparse and exploiting this fact can be very beneficial. This has led to an immense interest in the problem of efficiently reconstructing a sparse signal from limited linear observations. More recently, low-rank approximation techniques have become prominent tools to approach problems arising in machine learning, system identification and quantum tomography.

In sparse and low-rank estimation problems, the challenge is the inherent intractability of the objective function, and one needs efficient methods to capture the low-dimensionality of these models. Convex optimization is often a promising tool to attack such problems. An intractable problem with a combinatorial objective can often be "relaxed" to obtain a tractable but almost as powerful convex optimization problem. This dissertation studies convex optimization techniques that can take advantage of low-dimensional representations of the underlying high-dimensional data. We provide provable guarantees that ensure that the proposed algorithms will succeed under reasonable conditions, and answer questions of the following flavor:

  • For a given number of measurements, can we reliably estimate the true signal?
  • If so, how good is the reconstruction as a function of the model parameters?

More specifically, i) Focusing on linear inverse problems, we generalize the classical error bounds known for the least-squares technique to the lasso formulation, which incorporates the signal model. ii) We show that intuitive convex approaches do not perform as well as expected when it comes to signals that have multiple low-dimensional structures simultaneously. iii) Finally, we propose convex relaxations for the graph clustering problem and give sharp performance guarantees for a family of graphs arising from the so-called stochastic block model. We pay particular attention to the following aspects. For i) and ii), we aim to provide a general geometric framework, in which the results on sparse and low-rank estimation can be obtained as special cases. For i) and iii), we investigate the precise performance characterization, which yields the right constants in our bounds and the true dependence between the problem parameters.
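The lasso formulation mentioned in (i) replaces the plain least-squares objective min ‖Ax − b‖² with the ℓ1-regularized min ½‖Ax − b‖² + λ‖x‖₁, whose solutions are sparse. A self-contained proximal-gradient (ISTA) sketch on a placeholder sparse-recovery instance:

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Proximal gradient (ISTA) for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    t = 1.0 / np.linalg.norm(A, 2) ** 2    # step size from the Lipschitz constant
    for _ in range(n_iter):
        g = x - t * (A.T @ (A @ x - b))    # gradient step on the smooth part
        x = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)  # soft threshold
    return x

# Placeholder instance: 40 measurements of a 100-dimensional signal
# with 5 nonzero entries.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[rng.choice(100, size=5, replace=False)] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(40)

x_hat = ista(A, b, lam=0.02)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.05))
```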

Relevance:

10.00%

Publisher:

Abstract:

I. Trimesic acid (1,3,5-benzenetricarboxylic acid) crystallizes with a monoclinic unit cell of dimensions a = 26.52 Å, b = 16.42 Å, c = 26.55 Å, and β = 91.53°, with 48 molecules per unit cell. Extinctions indicated a space group of Cc or C2/c; a satisfactory structure was obtained in the latter with 6 molecules per asymmetric unit (C54O36H36, with a formula weight of 1261 g). Of approximately 12,000 independent reflections within the CuKα sphere, the intensities of 11,563 were recorded visually from equi-inclination Weissenberg photographs.

The structure was solved by packing considerations aided by molecular transforms and two- and three-dimensional Patterson functions. Hydrogen positions were found on difference maps. A total of 978 parameters were refined by least squares; these included hydrogen parameters and anisotropic temperature factors for the C and O atoms. The final R factor was 0.0675; the final "goodness of fit" was 1.49. All calculations were carried out on the Caltech IBM 7040-7094 computer using the CRYRM Crystallographic Computing System.
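For orientation, the R factor and "goodness of fit" quoted above are the standard crystallographic residual R = Σ| |Fo| − |Fc| | / Σ|Fo| and the error-weighted residual per degree of freedom. A toy computation with placeholder structure-factor amplitudes:

```python
import numpy as np

def r_factor(F_obs, F_calc):
    """Crystallographic residual: sum(| |Fo| - |Fc| |) / sum(|Fo|)."""
    F_obs, F_calc = np.abs(F_obs), np.abs(F_calc)
    return np.sum(np.abs(F_obs - F_calc)) / np.sum(F_obs)

def goodness_of_fit(F_obs, F_calc, sigma, n_params):
    """sqrt( sum(((Fo - Fc)/sigma)^2) / (n_obs - n_params) )."""
    resid = (np.abs(F_obs) - np.abs(F_calc)) / sigma
    return np.sqrt(np.sum(resid ** 2) / (len(F_obs) - n_params))

# Placeholder amplitudes standing in for the ~11,500 observed reflections.
rng = np.random.default_rng(2)
Fo = rng.uniform(10.0, 100.0, 200)
Fc = Fo * (1.0 + 0.05 * rng.standard_normal(200))
print(f"R   = {r_factor(Fo, Fc):.4f}")
print(f"GoF = {goodness_of_fit(Fo, Fc, sigma=0.05 * Fo, n_params=20):.2f}")
```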

The six independent molecules fall into two groups of three nearly parallel molecules. All molecules are connected by carboxyl-to-carboxyl hydrogen-bond pairs to form a continuous array of six-molecule rings with a chicken-wire appearance. These arrays bend to assume two orientations, forming pleated sheets. Arrays in different orientations interpenetrate - three molecules in one orientation passing through the holes of three parallel arrays in the alternate orientation - to produce a completely interlocking network. One third of the carboxyl hydrogen atoms were found to be disordered.

II. Optical transforms as related to x-ray diffraction patterns are discussed with reference to the theory of Fraunhofer diffraction.

The use of a systems approach in crystallographic computing is discussed with special emphasis on the way in which this has been done at the California Institute of Technology.

An efficient manner of calculating Fourier and Patterson maps on a digital computer is presented. Expressions for the calculation of to-scale maps for standard sections and for general-plane sections are developed; space-group-specific expressions in a form suitable for computers are given for all space groups except the hexagonal ones.

Expressions for the calculation of settings for an Eulerian-cradle diffractometer are developed for both the general triclinic case and the orthogonal case.

Photographic materials on pp. 4, 6, 10, and 20 are essential and will not reproduce clearly on Xerox copies. Photographic copies should be ordered.

Relevance:

10.00%

Publisher:

Abstract:

Part I: The dynamic response of an elastic half space to an explosion in a buried spherical cavity is investigated by two methods. The first is implicit, and the final expressions for the displacements at the free surface are given as a series of spherical wave functions whose coefficients are solutions of an infinite set of linear equations. The second method is based on Schwarz's technique for solving boundary value problems, and leads to an iterative solution, starting with the known expression for a point source in a half space as the first term. The iterative series is transformed into a system of two integral equations, and into an equivalent set of linear equations. In this way, a dual interpretation of the physical phenomena is achieved. The systems are treated numerically, and the Rayleigh-wave part of the displacements is given in the frequency domain. Several comparisons with simpler cases are analyzed to show the effect of the cavity radius-to-depth ratio on the spectra of the displacements.

Part II: A high-speed, large-capacity hypocenter-location program has been written for an IBM 7094 computer. Important modifications to the standard method of least squares have been incorporated in it, among them a new way to obtain the depth of shocks from the normal equations, and the computation of variable travel times for local shocks in order to account automatically for crustal variations. The multiregional travel times, largely based upon the investigations of the United States Geological Survey, are compared with actual traverses to test their validity.
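Hypocenter location by least squares follows Geiger's classical scheme: linearize the travel-time residuals about a trial hypocenter, solve for a correction, and iterate. A minimal sketch for a homogeneous half-space with straight rays (the constant velocity, synthetic stations, and arrival times are all placeholders; the program described above additionally handles multiregional travel times and crustal corrections):

```python
import numpy as np

V = 6.0  # assumed constant P-wave velocity, km/s (toy homogeneous model)

def travel_time(h, stations):
    """Straight-ray arrival times from hypocenter h = (x, y, z, t0)."""
    return h[3] + np.linalg.norm(stations - h[:3], axis=1) / V

def locate(stations, t_obs, h0, n_iter=10):
    """Geiger's method: iterative linearized least squares on residuals."""
    h = np.array(h0, dtype=float)
    for _ in range(n_iter):
        d = np.linalg.norm(stations - h[:3], axis=1)
        r = t_obs - travel_time(h, stations)   # travel-time residuals
        # Partial derivatives: dt/dx_i = (x_i - s_i) / (V d), dt/dt0 = 1.
        G = np.column_stack([(h[:3] - stations) / (V * d[:, None]),
                             np.ones(len(d))])
        dh, *_ = np.linalg.lstsq(G, r, rcond=None)
        h += dh
    return h

# Synthetic test: six surface stations, true hypocenter at 12 km depth.
rng = np.random.default_rng(3)
stations = rng.uniform(-50.0, 50.0, (6, 3))
stations[:, 2] = 0.0
h_true = np.array([10.0, -5.0, 12.0, 0.0])
t_obs = travel_time(h_true, stations) + 0.02 * rng.standard_normal(6)

print(np.round(locate(stations, t_obs, h0=[0.0, 0.0, 5.0, 0.0]), 2))
```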

It is shown that several crustal phases provide enough control to obtain good solutions in depth for nuclear explosions, even though not all the recording stations are in the region where crustal corrections are considered. The use of European travel times to locate the French nuclear explosion of May 1962 in the Sahara proved more adequate than previous work.

A simpler program, with manual crustal corrections, is used to process the Kern County series of aftershocks, and a clearer picture of the tectonic mechanism of the White Wolf fault is obtained.

Shocks in the California region are processed automatically, and statistical frequency-depth and energy-depth curves are discussed in relation to the tectonics of the area.