867 results for Alternative construction method
Abstract:
Moulton Hall, Chapman College, Orange, California, ca. 1975. Designed by Leason Pomeroy III & Associates of Orange, using a tilt-up concrete construction method. Completed in 1975, this 44,592 sq.ft. building is named in memory of an artist and patroness of the arts, Nellie Gail Moulton. Within this structure are the departments of Art, Communications, and Theatre/Dance as well as the Guggenheim Gallery and Waltmar Theatre. Waltmar Theatre was a gift from the late Walter and Margaret Schmid. The Guggenheim Gallery is used for the art exhibits presented by the art department and other departments on campus.
Abstract:
Moulton Hall under construction, Chapman College, Orange, California, ca. 1975.
Abstract:
Waltmar Theatre steps, Moulton Hall, Chapman College, Orange, California, ca. 1975.
Abstract:
Waltmar Theatre entrance steps, Moulton Hall, Chapman College, Orange, California, ca. 1975.
Abstract:
Played by María Elena Velasco since the late 1960s, la India María stages an "authentic" Indian woman who, despite her status and social limitations, denounces her treatment by the institutions to which she is subject: the political, judicial, economic, and religious systems. Nevertheless, when the films were first shown on the big screen, critics focused mainly on their superficial aspects and condemned the way Indians and Mexico were represented, deeming it reactionary. In the early 1990s, researchers began studying her films through a "negotiated" reading: they examine the humorous effect her performance and adventures produce on the audience, while acknowledging the ambiguity of the character and the narratives and highlighting the discourses of ethnicity and class. Through an analysis of Tonta, tonta pero no tanto (Silly, Silly, but Not That Much) by Fernando Cortés (1972), Ni de aquí ni de allá (Neither from Here nor from There) by María Elena Velasco (1988), and Sor Tequila (Sister Tequila) by Rogelio González (1977), my thesis contributes to this reading by examining three subjects: the stereotype crystallized in this character, to show how it enables a critique of Mexican society; the new cultural challenges the neoliberal system poses to Indigenous people; and the transformation of the masculine and of the audience through an alternative construction of the feminine.
Abstract:
The implementation of molecular biology methods such as the polymerase chain reaction (PCR) has enabled sensitive and specific diagnostics for many diseases, among which infectious diseases are of particular interest. To date, identification methods have relied mainly on culture and serology because of their sensitivity and specificity, but they are time-consuming and expensive. Urine samples have become a non-invasive alternative source of DNA for molecular biology analyses. Methodology: implementation of a strategy for obtaining DNA from urine samples. The samples were taken from daycare children to document the presence or absence of PCR inhibitors through the amplification of human cytomegalovirus (HCMV) genes. Results: 27.1% of the analyzed samples showed specific amplification for HCMV; no significant differences were found in the presence of the virus across the three strata, but there were differences in band intensity. Conclusion: the absence of PCR inhibitors was verified by amplifying the β-globin gene. A molecular methodology for the identification of HCMV was standardized, which can be applied
Abstract:
This thesis, although framed within the theory of Molecular Quantum Similarity Measures (MQSM), branches into three clearly defined areas: - The creation of Molecular IsoDensity COntours (MIDCOs) from fitted electron densities. - The development of a molecular superposition method as an alternative to the maximum-similarity rule. - Quantitative Structure-Activity Relationships (QSAR). The objective in the MIDCO field is to apply fitted density functions, originally devised to reduce the cost of MQSM calculations, to the generation of MIDCOs. A comparative graphical study is carried out between density functions fitted to different basis sets and densities obtained from ab initio calculations. The visual analogy between the fitted and ab initio functions across the range of density representations obtained, together with the previously computed and fully comparable similarity values, supports the use of these fitted functions. Beyond the initial purpose, two studies complementary to the simple representation of densities were carried out: curvature analysis and the extension to macromolecules. The first verifies not only the similarity of the MIDCOs but also the coherence of their behaviour in terms of curvature, making it possible to observe inflection points in the density representation and to see graphically the regions where the density is concave or convex. This first study reveals that the fitted densities and those calculated at the ab initio level behave in an entirely analogous way. In the second part of this work, the method was extended to larger molecules, of up to about 2500 atoms. Finally, part of the MEDLA philosophy is applied.
Knowing that the electron density decays rapidly away from the nuclei, its calculation can be omitted at large distances from them. The space is therefore partitioned, and the fitted functions of each atom are evaluated only within a small region surrounding the atom in question. This procedure reduces the computation time, which becomes linear in the number of atoms in the molecule. The chapter on molecular superposition deals with the design of an algorithm, and its implementation as a program named the Topo-Geometrical Superposition Algorithm (TGSA), intended to provide alignments that agree with chemical intuition. The result is a computer program, coded in Fortran 90, which aligns molecules pairwise considering only atomic numbers and interatomic distances. The complete absence of theoretical parameters yields a general molecular superposition method that provides an intuitive alignment quickly and with little user intervention. TGSA has mainly been used to compute similarities for later use in QSAR; these similarities generally do not match the value that the maximum-similarity rule would give, especially when heavy atoms are involved. Finally, the last chapter, devoted to Quantum Similarity within the QSAR framework, addresses three different aspects: - Use of similarity matrices. Here the so-called similarity matrix, computed from the pairwise similarities within a set of molecules, comes into play. Suitably processed, this matrix is then used as a source of molecular descriptors for QSAR studies. Within this area, several correlation studies of pharmacological and toxicological interest, as well as of various physical properties, have been carried out. - Application of the electron-electron interaction energy, taken as a form of self-similarity.
This modest contribution consists, briefly, of taking the value of this quantity and, by analogy with the notation of molecular quantum self-similarity, treating it as a particular case of that measure. This interaction energy is easily obtained from quantum-chemistry software and is well suited to a first preliminary correlation study in which it serves as the sole descriptor. - Computation of self-similarities in which the density has been modified to enhance the role of a substituent. Previous work with fragment densities, despite giving very good results, lacks a certain conceptual rigour: a fragment, supposedly responsible for the molecular activity, is isolated from the rest of the molecular structure, even though the densities associated with that fragment already differ because they belong to skeletons with different substitutions. A procedure that fills the gap left by simply separating the fragment, thus considering the whole molecule (computing its self-similarity) while avoiding unwanted self-similarity values caused by heavy atoms, is the use of Fermi hole densities defined around the fragment of interest. This procedure modifies the density so that it is mostly concentrated in the region of interest, yet still yields a density function that behaves mathematically like the regular electron density and can therefore be incorporated into the molecular similarity framework. Self-similarities computed with this methodology have led to good correlations for substituted aromatic acids, thereby providing an explanation of their behaviour. From another point of view, conceptual contributions have also been made.
A new similarity measure, based on kinetic energy, has been implemented: it takes the recently developed kinetic-energy density function which, since it behaves mathematically like the regular electron density, has been incorporated into the similarity framework. Satisfactory QSAR models have been obtained from this measure for several molecular sets. Within the treatment of similarity matrices, the so-called stochastic transformation has been implemented as an alternative to the use of the Carbó index. This transformation of the similarity matrix yields a new, non-symmetric matrix that can subsequently be processed to build QSAR models.
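The two matrix treatments mentioned above, the Carbó index normalisation and the stochastic transformation, can be illustrated with a small numerical sketch. The similarity matrix below is random stand-in data, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical symmetric similarity matrix Z (Z_ij = Z_ji, with the
# diagonal holding self-similarities), a stand-in for MQSM values.
A = rng.uniform(0.5, 2.0, size=(5, 5))
Z = (A + A.T) / 2.0
np.fill_diagonal(Z, Z.diagonal() + 2.0)   # self-similarity dominates

# Carbó index: C_ij = Z_ij / sqrt(Z_ii * Z_jj), a normalised
# similarity with ones on the diagonal.
d = np.sqrt(np.diag(Z))
carbo = Z / np.outer(d, d)

# Stochastic transformation: divide each row by its row sum, yielding
# a row-stochastic and generally non-symmetric descriptor matrix.
S = Z / Z.sum(axis=1, keepdims=True)

print(np.allclose(S.sum(axis=1), 1.0))    # True: each row sums to one
```

Note how the stochastic matrix loses the symmetry of Z, which is exactly the property the thesis exploits to obtain a richer set of QSAR descriptors.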
Abstract:
The complexity inherent in climate data makes it necessary to introduce more than one statistical tool to help the researcher gain insight into the climate system. Empirical orthogonal function (EOF) analysis is one of the most widely used methods for analyzing weather/climate modes of variability and for reducing the dimensionality of the system. Simple structure rotation of EOFs can enhance the interpretability of the obtained patterns, but it cannot provide anything more than temporal uncorrelatedness. In this paper, an alternative rotation method based on independent component analysis (ICA) is considered. ICA is viewed here as a method of EOF rotation. Starting from an initial EOF solution, rather than rotating the loadings toward simplicity, ICA seeks a rotation matrix that maximizes the independence of the components in the time domain. If the underlying climate signals have independent forcing, one can expect to find loadings with interpretable patterns whose time coefficients have properties that go beyond the simple noncorrelation observed in EOFs. The methodology is presented and an application to the monthly mean sea level pressure (SLP) field is discussed. Among the rotated (to independence) EOFs, the North Atlantic Oscillation (NAO) pattern, an Arctic Oscillation–like pattern, and a Scandinavian-like pattern have been identified. The results suggest that the NAO is an intrinsic mode of variability independent of the Pacific.
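The EOF-then-rotate pipeline the abstract describes can be sketched with scikit-learn, using PCA for the initial EOF step and FastICA as the rotation toward temporal independence. The two-mode field below is synthetic stand-in data, not the SLP field of the paper:

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
# Synthetic "climate" field: 500 monthly time steps x 40 grid points,
# built from two independent source signals (stand-ins for modes).
t = np.arange(500)
sources = np.column_stack([np.sign(np.sin(0.07 * t)),   # square-wave mode
                           np.sin(0.013 * t)])          # slow oscillation
mixing = rng.normal(size=(2, 40))
field = sources @ mixing + 0.1 * rng.normal(size=(500, 40))

# Step 1: EOF analysis (PCA) to reduce dimensionality.
pca = PCA(n_components=2)
pcs = pca.fit_transform(field)            # temporal coefficients (PCs)

# Step 2: rotate the retained EOF subspace toward statistical
# independence of the time coefficients, not loading simplicity.
ica = FastICA(n_components=2, whiten="unit-variance", random_state=0)
ic_timeseries = ica.fit_transform(pcs)    # rotated (independent) components

print(ic_timeseries.shape)                # (500, 2)
```

The rotated time series recover (up to sign and scale) the two independent sources, whereas the raw PCs only guarantee uncorrelatedness.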
Abstract:
Annatto dyes are widely used in food and are finding increasing interest for application in the pharmaceutical and cosmetics industries. Bixin is the main pigment extracted from annatto seeds and accounts for 80% of the carotenoids in the outer coat of the seeds; norbixin is the water-soluble form of bixin. Typically, annatto dyes are extracted from the seeds by mechanical means or by solutions of alkali, edible oil or organic solvents, or a combination of these, depending on the desired final product. In this work colloidal gas aphrons (CGAs) are investigated as an alternative separation method for the recovery of norbixin from a raw extraction solution of annatto pigments in KOH. A volume of CGAs generated from a cationic surfactant (CTAB) solution is mixed with a volume of annatto solution; when the mixture is allowed to settle, it separates into a top aphron phase and a bottom liquid phase. Potassium norbixinate present in the annatto solution interacts with the surfactant in the aphron phase, resulting in the effective separation of norbixin. A recovery of 94% was achieved at a CTAB to norbixin molar ratio of 3.3. In addition, a mechanism of separation is proposed based on the separation results with the cationic surfactant and an anionic surfactant (bis-2-ethylhexyl sulfosuccinate, AOT) and on measurements of the surfactant to norbixin ratio in the aphron phase: electrostatic interactions between the surfactant and norbixin molecules result in the formation of a coloured complex and the effective separation of norbixin. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
This work proposes a unified neurofuzzy modelling scheme. To begin with, the initial fuzzy rule base construction is based on fuzzy clustering, utilising a Gaussian mixture model (GMM) combined with analysis of variance (ANOVA) decomposition, in order to obtain more compact univariate and bivariate membership functions over the subspaces of the input features. The means and covariances of the Gaussian membership functions are found by the expectation-maximisation (EM) algorithm, with the merit of revealing the underlying density distribution of the system inputs. The resultant set of membership functions forms the basis of the generalised fuzzy model (GFM) inference engine. The model structure and parameters of this neurofuzzy model are identified via supervised subspace orthogonal least squares (OLS) learning. Finally, instead of providing a deterministic class label as the model output by convention, a logistic regression model is applied to present the classifier's output, in which the sigmoid-type logistic transfer function scales the outputs of the neurofuzzy model to class probabilities. Experimental validation results are presented to demonstrate the effectiveness of the proposed neurofuzzy modelling scheme.
Abstract:
Path-integral representations for a scalar particle propagator in non-Abelian external backgrounds are derived. To this end, we generalize the path-integral construction procedure proposed by Gitman and Schvartsman to any representation of SU(N) given in terms of antisymmetric generators. For arbitrary representations of SU(N), we present an alternative construction by means of fermionic coherent states. From the path-integral representations we derive pseudoclassical actions for a scalar particle placed in non-Abelian backgrounds. These actions are classically analyzed and then quantized to prove their consistency.
Abstract:
Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups.
We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity, and we propose a modification of the test statistic that provided a better heteroskedasticity correction in our simulations.
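The mechanism the abstract highlights, that larger groups have lower-variance aggregate errors, can be checked with a tiny Monte Carlo sketch. The numbers below are illustrative toys, not the paper's simulations:

```python
import numpy as np

rng = np.random.default_rng(3)
# Individual-level errors aggregated to group x time cells: groups with
# more observations have lower-variance cell means, which induces
# heteroskedasticity in the aggregate DID regression.
group_sizes = [20, 2000]          # one small and one large group
reps = 5000                        # simulated group x time cells per size
var_of_means = {}
for n in group_sizes:
    cell_means = rng.normal(0.0, 1.0, size=(reps, n)).mean(axis=1)
    var_of_means[n] = cell_means.var()

# Var(mean) ~ sigma^2 / n, so the large group's cell mean is far less
# noisy; the ratio of variances is roughly 2000/20 = 100.
print(var_of_means[20] / var_of_means[2000])
```

An inference method that treats all group × time cells as homoskedastic will therefore mis-weight small and large groups, which is exactly the over-/under-rejection pattern described above.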
Abstract:
Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference method to linear factor models when there are few treated groups.
We also propose a permutation test for the synthetic control estimator that provided a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).
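A minimal sketch of permutation (placebo) inference in the spirit of the test discussed above: treatment is reassigned to each control unit in turn and the treated unit's estimate is compared with the resulting placebo distribution. A simple post-minus-pre gap stands in for a full synthetic control fit, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
# 21 units observed over 30 pre- and 10 post-treatment periods.
n_units, n_pre, n_post = 21, 30, 10
Y = rng.normal(0.0, 1.0, size=(n_units, n_pre + n_post))
Y[0, n_pre:] += 2.0               # unit 0 is treated; true effect = 2

def effect(y):
    # Post-minus-pre mean gap as the test statistic (a stand-in for a
    # synthetic-control post-treatment gap).
    return y[n_pre:].mean() - y[:n_pre].mean()

t_treated = effect(Y[0])
placebos = np.array([effect(Y[j]) for j in range(1, n_units)])
# Permutation p-value: share of units with an effect at least as large
# in absolute value as the treated unit's.
p_value = (np.sum(np.abs(placebos) >= abs(t_treated)) + 1) / n_units
print(p_value)
```

With a genuine effect and well-behaved errors, the treated unit's statistic sits in the tail of the placebo distribution; the heteroskedasticity issue arises when units differ in size, so that some placebo statistics are mechanically noisier than others.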