948 results for Single Equation Models


Relevância: 30.00%

Resumo:

This article analyses the distributive characteristics of the Kaldor-Pasinetti process, assuming that the government sector runs persistent deficits that can be financed through different instruments, such as the issuance of bonds and money. This approach makes it possible to study how government activity affects the distribution of income between capitalists and workers, and thus to obtain generalizations of the Cambridge Theorem in which earlier versions, such as those of Steedman (1972), Pasinetti (1989), Dalziel (1991) and Faria (2000), emerge as particular cases.

Relevância: 30.00%

Resumo:

Energy Conservation Measure (ECM) project selection is made difficult by real-world constraints, limited resources to implement savings retrofits, the variety of suppliers in the market, and project financing alternatives. Many of these energy-efficient retrofit projects should be viewed as a series of investments with annual returns for these traditionally risk-averse agencies. Given a list of available ECMs, federal, state and local agencies must determine how to implement projects at lowest cost. The most common methods of implementation planning are suboptimal with respect to cost. Federal, state and local agencies can obtain greater returns on their energy conservation investment than with traditional methods, regardless of the implementing organization. This dissertation outlines several approaches to improve the traditional energy conservation models. Public buildings in regions with similar energy conservation goals, in the United States or internationally, can also benefit greatly from this research. Additionally, many private owners of buildings are under mandates to conserve energy; e.g., Local Law 85 of the New York City Energy Conservation Code requires any building, public or private, to meet the most current energy code upon any alteration or renovation. Thus, both public and private stakeholders can benefit from this research. The research in this dissertation advances and presents models that decision-makers can use to optimize the selection of ECM projects with respect to the total cost of implementation. A practical application of a two-level mathematical program with equilibrium constraints (MPEC) improves the current best practice for agencies seeking the most cost-effective selection when leveraging energy services companies or utilities. The two-level model maximizes savings to the agency and profit to the energy services companies (Chapter 2).
An additional model leverages a single congressional appropriation to implement ECM projects (Chapter 3). Returns from implemented ECM projects are used to fund additional ECM projects. In these cases, fluctuations in energy costs and uncertainty in the estimated savings strongly influence ECM project selection and the amount of the appropriation requested. A proposed risk aversion method imposes a minimum on the number of projects completed in each stage, and a comparative method using Conditional Value at Risk is analyzed; time consistency is also addressed. This work demonstrates how a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to simulate interactive effects between ECM projects. The approach makes use of McCormick (1976) inequalities to re-express constraints involving products of binary variables as an exact linearization (related to the convex hull of those constraints). This model additionally shows the benefits of learning between stages while remaining consistent with the single-congressional-appropriation framework.
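The exact linearization of products of binary variables mentioned above can be illustrated for the simplest case, the product of two binaries (a minimal sketch in Python; the variable names are illustrative, not those of the dissertation's models):

```python
def mccormick_product_constraints(x, y, z):
    """Check the exact linearization of z = x*y for binary x, y:
        z <= x,  z <= y,  z >= x + y - 1,  z >= 0
    For binary x and y these four inequalities force z to equal x*y."""
    return z <= x and z <= y and z >= x + y - 1 and z >= 0

# enumerate all binary combinations: the only feasible z equals the product
for x in (0, 1):
    for y in (0, 1):
        feasible = [z for z in (0, 1) if mccormick_product_constraints(x, y, z)]
        assert feasible == [x * y]
```

Because the feasible z is uniquely determined, the product term can be replaced by a new variable plus these linear constraints without changing the optimum.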

Relevância: 30.00%

Resumo:

Single-cell functional proteomics assays can connect genomic information to biological function through quantitative and multiplex protein measurements. Tools for single-cell proteomics have developed rapidly over the past five years and are providing unique opportunities. This thesis describes an emerging microfluidics-based toolkit for single-cell functional proteomics, focusing on the development of single-cell barcode chips (SCBCs) with applications in fundamental and translational cancer research.

The thesis first discusses a microchip designed to simultaneously quantify a panel of secreted, cytoplasmic and membrane proteins from single cells, the prototype for subsequent proteomic microchips with more sophisticated designs for preclinical cancer research or clinical applications. The SCBCs are a highly versatile and information-rich tool for single-cell functional proteomics. They are based upon isolating individual cells, or defined numbers of cells, within microchambers, each of which is equipped with a large antibody microarray (the barcode); between a few hundred and ten thousand microchambers are included within a single microchip. Functional proteomics assays at single-cell resolution yield unique pieces of information that significantly shape thinking in cancer research. An in-depth discussion of the analysis and interpretation of this unique information, such as functional protein fluctuations and protein-protein correlative interactions, follows.

The SCBC is a powerful tool for resolving the functional heterogeneity of cancer cells. It has the capacity to extract a comprehensive picture of the signal transduction network from single tumor cells and thus provides insight into the effect of targeted therapies on protein signaling networks. We demonstrate this point by applying SCBCs to investigate three isogenic cell lines of glioblastoma multiforme (GBM).

The cancer cell population is highly heterogeneous, with high-amplitude fluctuations at the single-cell level, which in turn grant robustness to the entire population. The notion of a stable population existing in the presence of random fluctuations is reminiscent of many physical systems that are successfully understood using statistical physics. Thus, tools derived from that field can be applied, using fluctuations to determine the nature of signaling networks. In the second part of the thesis, we focus on such a case, using thermodynamics-motivated principles to understand cancer cell hypoxia: single-cell proteomics assays coupled with a quantitative version of Le Chatelier's principle derived from statistical mechanics yield detailed and surprising predictions, which were found to be correct in both cell lines and primary tumor models.

The third part of the thesis demonstrates the application of this technology in preclinical cancer research to study GBM cancer cell resistance to molecular targeted therapy. Physical approaches to anticipate therapy resistance and to identify effective therapy combinations are discussed in detail. Our approach is based upon elucidating the signaling coordination within the phosphoprotein signaling pathways that are hyperactivated in human GBMs, and interrogating how that coordination responds to perturbation by targeted inhibitors. Most signaling cascades are constituted by strongly coupled protein-protein interactions. A physical analogy of such a system is the strongly coupled atom-atom interactions in a crystal lattice. Just as atomic interactions can be decomposed into a series of independent normal vibrational modes, a simplified picture of signaling network coordination can be achieved by diagonalizing protein-protein correlation or covariance matrices, decomposing the pairwise correlative interactions into a set of distinct linear combinations of signaling proteins (i.e. independent signaling modes). By doing so, two independent signaling modes were resolved, one associated with mTOR signaling and a second with ERK/Src signaling, which in turn allowed us to anticipate resistance and to design effective combination therapies, as well as to identify therapies and therapy combinations that will be ineffective. We validated our predictions in mouse tumor models and all predictions were borne out.
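The diagonalization step described above can be sketched as follows, using synthetic data in place of the thesis's phosphoprotein measurements (the protein groupings and magnitudes are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical single-cell measurements: 200 cells x 4 proteins, built so that
# proteins (0, 1) covary (an "mTOR-like" mode) and proteins (2, 3) covary
# (an "ERK/Src-like" mode) -- illustrative data only
mode1 = rng.normal(size=(200, 1))
mode2 = rng.normal(size=(200, 1))
data = np.hstack([mode1, mode1, mode2, mode2]) + 0.1 * rng.normal(size=(200, 4))

cov = np.cov(data, rowvar=False)         # protein-protein covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # diagonalize: columns are modes
# the two largest-eigenvalue eigenvectors recover the two coupled pairs,
# i.e. the "independent signaling modes" of the text
top = eigvecs[:, np.argsort(eigvals)[::-1][:2]]
```

Each column of `top` is a linear combination of proteins whose variance is independent of the other, mirroring the decomposition into normal modes.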

In the last part, some preliminary results on the clinical translation of single-cell proteomics chips are presented. The successful demonstration of our work on human-derived xenografts provides the rationale to extend our current work into the clinic. It will enable us to interrogate GBM tumor samples in a way that could potentially yield a straightforward, rapid interpretation, so that we can give therapeutic guidance to the attending physicians within a clinically relevant time scale. The technical challenges of clinical translation are presented and our solutions to address them are discussed as well. A clinical case study then follows, in which preliminary data collected from a pediatric GBM patient bearing an EGFR-amplified tumor are presented to demonstrate the general protocol and workflow of the proposed clinical studies.

Relevância: 30.00%

Resumo:

Slender rotating structures are used in many mechanical systems. These structures can suffer from undesired vibrations that can affect the components and safety of a system. Furthermore, since some of these structures operate in harsh environments, installation and operation of the sensors needed for closed-loop, collocated control schemes may not be feasible; hence the need for an open-loop, non-collocated scheme for controlling the dynamics of these structures. In this work, the effects of drive speed modulation on the dynamics of slender rotating structures are studied. Slender rotating structures are mechanical rotating structures whose length-to-diameter ratio is large. For these structures, the torsion mode natural frequencies can be low; in particular, for isotropic structures, the first few torsion mode frequencies can be of the same order as the first few bending mode frequencies. These situations can be conducive to energy transfer between bending and torsion modes. Torsional vibrations experienced by rotating structures in continuous rotor-stator contact occur in many rotating mechanical systems. Drill strings used in the oil and gas industry are an example of rotating structures whose torsional vibrations can be deleterious to the components of the drilling system. As a novel approach to mitigating undesired vibrations, the effects of adding a sinusoidal excitation to the rotation speed of a drill string are studied. A portion of the drill string located within a borewell is considered; this rotating structure is modeled as an extended Jeffcott rotor, and a sinusoidal excitation is added to the drive speed of the rotor. After constructing a three-degree-of-freedom model to capture lateral and torsional motions, the equations of motion are reduced to a single differential equation governing torsional vibrations during continuous stator contact.
An approximate solution has been obtained by making use of the Method of Direct Partition of Motions with the governing torsional equation of motion. The results showed that for a rotor undergoing forward or backward whirling, the addition of sinusoidal excitation to the drive speed can cause an increase in the equivalent torsional stiffness, smooth the discontinuous friction force at contact, and reduce the regions of negative slope in the friction coefficient variation with respect to speed. Experiments with a scaled drill string apparatus have also been conducted and the experimental results show good agreement with the numerical results obtained from the developed models. These findings suggest that the extended Jeffcott rotordynamics model can be useful for studies of rotor dynamics in situations with continuous rotor-stator contact. Furthermore, the results obtained suggest that the drive speed modulation scheme can have value for attenuating drill-string vibrations.
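The smoothing effect of drive speed modulation on the discontinuous friction force can be illustrated with a minimal numerical sketch (a plain Coulomb friction law and illustrative parameter values, not the extended Jeffcott model itself):

```python
import math

def coulomb(v, mu=0.3):
    """Discontinuous Coulomb friction coefficient (sign of sliding speed)."""
    return mu * math.copysign(1.0, v) if v != 0 else 0.0

def effective_friction(v_mean, mod_amp, n=2000, mu=0.3):
    """Average the Coulomb friction over one period of a sinusoidal speed
    modulation v(t) = v_mean + mod_amp*sin(theta).  Fast modulation turns
    the sign discontinuity at zero speed into a graded effective
    characteristic, qualitatively as in the Method of Direct Partition
    of Motions."""
    total = 0.0
    for k in range(n):
        theta = 2 * math.pi * k / n
        total += coulomb(v_mean + mod_amp * math.sin(theta), mu)
    return total / n
```

For mean speeds well above the modulation amplitude the effective coefficient equals the plain Coulomb value; inside the modulated range it varies smoothly through zero instead of jumping.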

Relevância: 30.00%

Resumo:

Aiming to obtain empirical models for the estimation of Syrah leaf area, a set of 210 fruiting shoots was randomly collected during the 2013 growing season in an adult experimental vineyard located in Lisbon, Portugal. Samples of 30 fruiting shoots were taken periodically from the stage of visible inflorescences to veraison (7 sampling dates). At the lab, primary and lateral leaves were separated from each shoot and numbered according to node insertion. For each leaf, the lengths of the central and lateral veins were recorded and the leaf area was then measured with a leaf area meter. For single-leaf area estimation, the best statistical model uses as explanatory variable the sum of the lengths of the two lateral leaf veins. For the estimation of leaf area per shoot, the approach of Lopes & Pinto (2005) was followed, based on 3 explanatory variables: the number of primary leaves and the areas of the largest and smallest leaves. The best statistical model for the estimation of primary leaf area per shoot uses a calculated variable obtained from the average of the largest and smallest primary leaf areas multiplied by the number of primary leaves. For lateral leaf area estimation, another model using the same type of calculated variable is also presented. All models explain a very high proportion of the variability in leaf area. Our results confirm the previously reported strong importance of the three measured variables (number of leaves and areas of the largest and smallest leaves) as predictors of shoot leaf area. The proposed models can be used to accurately predict Syrah primary and secondary leaf area per shoot in any phase of the growing cycle. They are inexpensive, practical, non-destructive methods which do not require specialized staff or expensive equipment.
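The single-leaf model fitting can be sketched as follows, with synthetic calibration data standing in for the thesis measurements and an allometric power law assumed for the example:

```python
import numpy as np

# hypothetical calibration data: x = sum of the two lateral vein lengths (cm),
# y = measured single-leaf area (cm^2) -- illustrative values only
x = np.array([4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0])
y = np.array([9.5, 21.0, 38.0, 59.0, 86.0, 117.0, 152.0])

# allometric model A = a * x**b, fitted as a straight line in log-log space
b, log_a = np.polyfit(np.log(x), np.log(y), 1)
a = np.exp(log_a)
predicted = a * x**b
# coefficient of determination in the original (untransformed) scale
r2 = 1 - np.sum((y - predicted)**2) / np.sum((y - np.mean(y))**2)
```

An exponent near 2 is what one would expect if leaf area scales with the square of a linear vein dimension.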

Relevância: 30.00%

Resumo:

Obnoxious single facility location models aim to find the best location for an undesired facility. "Undesired" is usually expressed in relation to so-called demand points that represent locations hindered by the facility. Because obnoxious facility location models are, as a rule, multimodal, the standard techniques of convex analysis used for locating desirable facilities in the plane may be trapped in local optima instead of the desired global optimum. It is assumed that having more optima coincides with being harder to solve. In this thesis the multimodality of obnoxious single facility location models is investigated in order to determine which models pose challenging problems and which are suitable for site selection. Selected for this are the obnoxious facility models that appear to be most important in the literature: the maximin model, which maximizes the minimum distance from a demand point to the obnoxious facility; the maxisum model, which maximizes the sum of distances from the demand points to the facility; and the minisum model, which minimizes the sum of damage of the facility to the demand points. All models are measured with Euclidean distances, and some also with the rectilinear distance metric. Furthermore, a suitable algorithm is selected for testing multimodality; of the algorithms tested in this thesis, Multistart is the most appropriate. A small numerical experiment shows that maximin models have on average the most optima, of which the model locating an obnoxious line segment has the most. Maxisum models have few optima and are thus not very hard to solve. Among the minisum models, the models with the most optima are those that take wind into account. In general, the generic models have fewer optima than the weighted versions, and models measured with the rectilinear norm have more optima than the same models measured with the Euclidean norm.

For the maximin models this can be explained in the numerical example: the shape of the norm coincides with a bound of the feasible area, so not all solutions are distinct optima. The difference found in the number of optima of the maxisum and minisum models cannot be explained by this phenomenon.
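The Multistart idea for the maximin model can be sketched as follows (the hill-climbing local search and all parameter values are illustrative choices, not the algorithm settings used in the thesis):

```python
import random

def multistart_maximin(demand_points, bounds, n_starts=50, n_steps=200, seed=1):
    """Multistart local search for the maximin obnoxious facility problem:
    maximize the minimum Euclidean distance from the facility to the demand
    points.  Each start performs a simple hill climb with a shrinking step;
    the best local optimum found over all starts is returned."""
    rng = random.Random(seed)
    (xmin, xmax), (ymin, ymax) = bounds

    def objective(p):
        return min(((p[0] - q[0])**2 + (p[1] - q[1])**2) ** 0.5
                   for q in demand_points)

    best, best_val = None, float("-inf")
    for _ in range(n_starts):
        p = (rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
        step = (xmax - xmin) / 4
        for _ in range(n_steps):
            cand = (min(max(p[0] + rng.uniform(-step, step), xmin), xmax),
                    min(max(p[1] + rng.uniform(-step, step), ymin), ymax))
            if objective(cand) > objective(p):
                p = cand
            else:
                step *= 0.98  # shrink the step when no improvement is found
        if objective(p) > best_val:
            best, best_val = p, objective(p)
    return best, best_val
```

With a single demand point at the centre of the unit square the global optima are the four corners, so a multimodal run should return a value near the corner distance of about 0.707.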

Relevância: 30.00%

Resumo:

In this thesis we present a mathematical formulation of the interaction between microorganisms such as bacteria or amoebae and chemicals, often produced by the organisms themselves. This interaction is called chemotaxis and leads to cellular aggregation. We derive several models to describe chemotaxis. The first is the pioneering Keller-Segel parabolic-parabolic model, derived from two different frameworks: a macroscopic perspective, and a microscopic perspective in which we start with a stochastic differential equation and perform a mean-field approximation. This parabolic model may be generalized by the introduction of a degenerate diffusion parameter, which depends on the density itself via a power law. We then derive a model for chemotaxis based on Cattaneo's law of heat propagation with finite speed, which is a hyperbolic model. The last model proposed here is a hydrodynamic model, which takes into account the inertia of the system through a friction force. In the limit of strong friction the model reduces to the parabolic model, whereas in the limit of weak friction we recover a hyperbolic model. Finally, we analyze the instability condition, the condition that leads to aggregation, and describe the different kinds of aggregates we may obtain: the parabolic models lead to clusters or peaks, whereas the hyperbolic models lead to the formation of network patterns or filaments. Moreover, we discuss the analogy between bacterial colonies and self-gravitating systems by comparing the chemotactic collapse and the gravitational collapse (Jeans instability).
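A minimal explicit finite-difference sketch of the parabolic-parabolic Keller-Segel system on a periodic 1-D domain (all parameter values are illustrative, not taken from the thesis):

```python
import math

# drho/dt = D*rho_xx - chi*(rho*c_x)_x    (cell density)
# dc/dt   = Dc*c_xx + a*rho - b*c         (chemoattractant)
N, L = 100, 1.0
dx, dt = L / N, 1e-5
D, Dc, chi, a, b = 0.01, 0.01, 1.0, 1.0, 1.0
rho = [1.0 + 0.01 * math.cos(2 * math.pi * i * dx / L) for i in range(N)]
c = [1.0] * N

for _ in range(1000):
    # chemotactic flux chi*rho*c_x, with periodic indexing
    flux = [0.0] * N
    for i in range(N):
        cx = (c[(i + 1) % N] - c[i - 1]) / (2 * dx)
        flux[i] = chi * rho[i] * cx
    new_rho, new_c = rho[:], c[:]
    for i in range(N):
        lap_r = (rho[(i + 1) % N] - 2 * rho[i] + rho[i - 1]) / dx**2
        lap_c = (c[(i + 1) % N] - 2 * c[i] + c[i - 1]) / dx**2
        dflux = (flux[(i + 1) % N] - flux[i - 1]) / (2 * dx)
        new_rho[i] = rho[i] + dt * (D * lap_r - dflux)
        new_c[i] = c[i] + dt * (Dc * lap_c + a * rho[i] - b * c[i])
    rho, c = new_rho, new_c
```

The central-difference scheme conserves total cell mass on the periodic domain, which is a useful sanity check before studying the growth of perturbations.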

Relevância: 30.00%

Resumo:

Suppose two or more variables are jointly normally distributed. If there is a common relationship among these variables, it is important to quantify it through the correlation coefficient, a parameter that measures its strength; from it one can develop a prediction equation and ultimately draw testable conclusions about the parent population. This research focused on the correlation coefficient ρ for the bivariate and trivariate normal distributions when equal variances and equal covariances are assumed. In particular, we derived the maximum likelihood estimators (MLEs) of the distribution parameters, assuming all of them are unknown, and studied the properties and asymptotic distribution of the estimator of ρ. Having shown this asymptotic normality, we were able to construct confidence intervals for the correlation coefficient ρ and test hypotheses about ρ. With a series of simulations, the performance of our new estimators was studied and compared with estimators that already exist in the literature. The results indicated that the MLE performs better than, or similarly to, the existing estimators.

Relevância: 30.00%

Resumo:

We introduce quantum sensing schemes for measuring very weak forces with a single trapped ion. They use the spin-motional coupling induced by the laser-ion interaction to transfer the relevant force information to the spin degree of freedom. The force estimation is therefore carried out simply by observing the Ramsey-type oscillations of the ion spin states. Three quantum probes are considered, represented by systems obeying the Jaynes-Cummings, quantum Rabi (in 1D) and Jahn-Teller (in 2D) models. By using dynamical decoupling schemes in the Jaynes-Cummings and Jahn-Teller models, our force sensing protocols can be made robust to the spin dephasing caused by thermal and magnetic field fluctuations. In the quantum-Rabi probe, the residual spin-phonon coupling vanishes, which makes this sensing protocol naturally robust to thermally induced spin dephasing. We show that the proposed techniques can be used to sense the axial and transverse components of the force with a sensitivity beyond the yN/√Hz range, i.e. in the xN/√Hz (xennonewton, 10^−27 N) range. The Jahn-Teller protocol, in particular, can be used to implement a two-channel vector spectrum analyzer for measuring ultra-low voltages.
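The Ramsey-type readout can be caricatured as follows; the linear phase-force mapping and the effective coupling scale `eta_m` are simplifying assumptions standing in for the actual Jaynes-Cummings/quantum-Rabi dynamics:

```python
import math

def ramsey_probability(force_N, tau_s, eta_m, hbar=1.054571817e-34):
    """Ramsey fringe for a spin-dependent coupling: assume a force F acting
    for time tau imprints a phase phi = F*eta*tau/hbar on the spin
    superposition, where eta is an effective length scale of the
    spin-motion coupling (a stand-in for the protocol-specific factor).
    The spin-up detection probability is then (1 + cos(phi))/2."""
    phi = force_N * eta_m * tau_s / hbar
    return 0.5 * (1.0 + math.cos(phi))
```

Reading the force off the fringe amounts to inverting phi(F); the sensitivity is set by how small a phase shift the oscillation can resolve in a given interrogation time.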

Relevância: 30.00%

Resumo:

Abstract: Recently there has been great interest in studying the flow characteristics of suspensions in different environmental and industrial applications, such as snow avalanches, debris flows, hydrotransport systems, and material casting processes. Regarding rheological aspects, the majority of these suspensions, such as fresh concrete, behave mostly as non-Newtonian fluids. Concrete is the most widely used construction material in the world. Due to the limitations of normal concrete in terms of workability and formwork filling ability, a new class of concrete was developed that is able to flow under its own weight, especially through narrow gaps in congested areas of the formwork. Accordingly, self-consolidating concrete (SCC) is a novel construction material that is gaining market acceptance in various applications. The higher fluidity of SCC enables it to be used in a number of special applications, such as densely reinforced sections. However, the higher flowability of SCC makes it more sensitive to segregation of coarse particles during flow (i.e., dynamic segregation) and thereafter at rest (i.e., static segregation). Dynamic segregation can increase when SCC flows over a long distance or in the presence of obstacles. Therefore, there is always a need to establish a trade-off between the flowability, passing ability, and stability properties of SCC suspensions. This should be taken into consideration when designing the casting process and the mixture proportioning of SCC, a task called the "workability design" of SCC. An efficient and inexpensive workability design approach consists of predicting and optimizing the workability of the concrete mixtures for the selected construction processes, such as transportation, pumping, casting, compaction, and finishing.

Indeed, the mixture proportioning of SCC should ensure the construction quality demands, such as the demanded levels of flowability, passing ability, filling ability, and stability (dynamic and static). It is therefore necessary to develop theoretical tools to assess under what conditions the construction quality demands are satisfied. Accordingly, this thesis is dedicated to analytical and numerical simulations that predict the flow performance of SCC under different casting processes, such as pumping and tremie applications, or casting using buckets. The L-Box and T-Box set-ups can evaluate the flow performance properties of SCC (e.g., flowability, passing ability, filling ability, shear-induced and gravitational dynamic segregation) in the casting of wall and beam elements. The specific objective of the study is to relate the numerical results of flow simulations of SCC in the L-Box and T-Box test set-ups, reported in this thesis, to the flow performance properties of SCC during casting. Accordingly, the SCC is modeled as a heterogeneous material. Furthermore, an analytical model is proposed to predict the flow performance of SCC in the L-Box set-up using dam break theory. In addition, the results of the numerical simulation of SCC casting in a reinforced beam are verified against experimental free-surface profiles. The results of numerical simulations of SCC casting (modeled as a single homogeneous fluid) are used to determine the critical zones corresponding to the highest risks of segregation and blocking. The effects of rheological parameters, density, particle content, distribution of reinforcing bars, and particle-bar interactions on the flow performance of SCC are evaluated using CFD simulations of SCC flow in the L-Box and T-Box test set-ups (modeled as a heterogeneous material).

Two new approaches are proposed to classify SCC mixtures based on filling ability and performability properties, the latter combining the flowability, passing ability, and dynamic stability of SCC.

Relevância: 30.00%

Resumo:

In the Iberian Variscides several first order arcuate structures have been considered. In spite of being highly studied, their characterization, formation mechanisms and even existence are still debatable. The main Ibero-Armorican Arc (IAA) is essentially defined by a predominant NW–SE trend in the Iberian branch and an E–W trend in the Brittany one. However, in northern Spain it presents a 180° rotation, sometimes known as the Cantabrian Arc (CA). The relation between both arcs is controversial, being considered either as a single arc due to one tectonic event, or as the result of a polyphasic process. According to the latter assumption, there is a later arcuate structure (CA) overlapping a previous major one (IAA). Whatever the models, they must be able to explain the presence of a Variscan sinistral transpression in Iberia and a dextral one in Armorica, and a deformation spanning from the Devonian to the Upper Carboniferous. Another arcuate structure in continuity with the CA, the Central-Iberian Arc (CIA), was recently proposed, mainly based upon magnetic anomalies, the geometry of major folds and Ordovician paleocurrents. A critical review of the structural, stratigraphic and geophysical data supports both the IAA and the CA, but as independent structures. However, the presence of a CIA is highly questionable and could not be supported. The complex strain pattern of the IAA and the CA could be explained by a Devonian–Carboniferous polyphasic indentation of a Gondwana promontory. In this model the CA is essentially a thin-skinned arc, while the IAA has a more complex and longer evolution that has led to a thick-skinned first order structure. Nevertheless, both arcs are essentially the result of a lithospheric bending process during the Iberian Variscides.

Relevância: 30.00%

Resumo:

In this paper an enhanced and more efficient parameter model is obtained from the generalized five-parameter (single-diode) model of PV cells. The paper also introduces, describes and implements a seven-parameter model for the photovoltaic (PV) cell, comprising two internal parameters and five external parameters. The model is obtained from the mathematical equations of an equivalent circuit consisting of a photo-generated current source, a series resistor, a shunt resistor and a diode. The fundamental equation of the PV cell is used to analyse and best fit the observation data. In particular, the bisection iteration method is used to obtain the expected result and to understand the deviation of the different parameters under various conditions. The resulting model can be used for measuring and understanding the behaviour of photovoltaic cells under given changes and for parameter extraction. The effect is also studied on the I-V and P-V characteristics of PV cells, though optimizing the output with real-time simulation remains a challenge. The working procedure is discussed and an experiment is presented to gain insight into the produced model and to decide upon its validity. In the end, we observed that the simulation results are very close to the produced model.
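The bisection iteration on the implicit single-diode equation can be sketched as follows (the five parameter values are illustrative defaults, not fitted values from the paper):

```python
import math

def diode_current(V, Iph=5.0, I0=1e-9, n=1.3, Rs=0.02, Rsh=100.0, Vt=0.02585,
                  tol=1e-9, max_iter=200):
    """Solve the implicit single-diode equation for the cell current I at a
    given terminal voltage V by bisection.  The residual
        f(I) = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh - I
    is monotonically decreasing in I, so the root is bracketed by [0, Iph]."""
    f = lambda I: (Iph - I0 * math.expm1((V + I * Rs) / (n * Vt))
                   - (V + I * Rs) / Rsh - I)
    lo, hi = 0.0, Iph  # the current lies between 0 and the photo-current
    if f(lo) < 0:      # past open-circuit voltage: no positive current
        return 0.0
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

Sweeping V from 0 to the open-circuit voltage with this routine traces the I-V curve, from which the P-V characteristic follows directly as P = V*I.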

Relevância: 30.00%

Resumo:

Depth represents a crucial piece of information in many practical applications, such as obstacle avoidance and environment mapping. This information can be provided either by active sensors, such as LiDARs, or by passive devices like cameras. A popular passive device is the binocular rig, which triangulates the depth of the scene through two synchronized and aligned cameras. However, many devices already available in existing infrastructures are monocular passive sensors, such as most surveillance cameras. The intrinsic ambiguity of the problem makes monocular depth estimation a challenging task. Nevertheless, recent progress in deep learning strategies is paving the way towards a new class of algorithms able to handle this complexity. This work addresses many relevant topics related to the monocular depth estimation problem. It presents networks capable of predicting accurate depth values even on embedded devices and without the need for expensive ground-truth labels at training time. Moreover, it introduces strategies to estimate the uncertainty of these models, and it shows that monocular networks can easily generate training labels for different tasks at scale. Finally, it evaluates off-the-shelf monocular depth predictors for the relevant use case of social distance monitoring, and shows how this technology overcomes the limitations of existing strategies.
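For a rectified binocular rig, the triangulation mentioned above reduces to a one-line relation (a minimal sketch; the calibration numbers in the test are made up):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth (metres) for a rectified stereo pair:
        Z = f * B / d
    where f is the focal length in pixels, B the baseline between the two
    cameras in metres, and d the disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

A monocular network has no disparity to invert, which is exactly the ambiguity that makes monocular depth estimation hard.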

Relevância: 30.00%

Resumo:

Understanding why market manipulation is conducted, under which conditions it is most profitable, and investigating the magnitude of these practices are crucial questions for financial regulators. Closing price manipulation induced by derivatives' expiration is the primary subject of this thesis. The first chapter provides a mathematical framework in continuous time to study the incentive to manipulate a set of securities induced by a derivative position. An agent holding a European-type contingent claim, depending on the price of a basket of underlying securities, is considered. The agent can affect the price of the underlying securities by trading on each of them before expiration. The elements of novelty are at least twofold: (1) a multi-asset market is considered; (2) the problem is solved by means of both classic optimisation and stochastic control techniques. Both linear and option payoffs are considered. In the second chapter an empirical investigation is conducted on the existence of expiration day effects in the UK equity market. Intraday data on FTSE 350 stocks over the six-year period 2015–2020 are used. The results show that the expiration of index derivatives is associated with a rise in both trading activity and volatility, together with significant price distortions. The expiration of single stock options appears to have little to no impact on the underlying securities. The last chapter examines the existence of patterns in line with closing price manipulation of UK stocks on option expiration days. The main contributions are threefold: (1) this is one of the few empirical studies on manipulation induced by the options market; (2) proprietary equity order book and transaction data sets are used to define manipulation proxies, providing a more detailed analysis; (3) the behaviour of proprietary trading firms is studied. Despite the industry concerns, no evidence is found of this type of manipulative behaviour.

Relevância: 30.00%

Resumo:

The interpretation of phase equilibrium and mass transport phenomena in gas/solvent - polymer systems at the molten or glassy state is relevant in many industrial applications. Among the tools available for the prediction of thermodynamic properties in these systems at the molten/rubbery state is the group contribution lattice-fluid equation of state (GCLF-EoS), developed by Lee and Danner and ultimately based on the Panayiotou and Vera LF theory. On the other side, a thermodynamic approach, the non-equilibrium lattice-fluid (NELF) model, was proposed by Doghieri and Sarti to consistently extend the description of thermodynamic properties of solute-polymer systems obtained through a suitable equilibrium model to non-equilibrium conditions below the glass transition temperature. The first objective of this work is to investigate the phase behaviour of solvent/polymer systems at the glassy state using the NELF model and to develop a predictive tool for gas or vapor solubility that could be applied in several different applications: membrane gas separation, barrier materials for food packaging, polymer-based gas sensors and drug delivery devices. Within the effort to develop a predictive tool of this kind, a revision of the group contribution method developed by High and Danner for the application of the LF model by Panayiotou and Vera is considered, with reference to possible alternatives for the mixing rule for the characteristic interaction energy between segments. The work also analyses gas permeability in polymer composite materials formed by a polymer matrix in which domains of a second phase are dispersed; attention is focused on relations for the deviation from Maxwell's law as a function of the arrangement, shape and loading of the dispersed domains.
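As a reference point for the deviations just mentioned, the classical Maxwell expression for the effective permeability of a dilute dispersion of spherical domains (permeability $P_d$, volume fraction $\phi_d$) in a continuous matrix (permeability $P_m$) can be quoted in its standard composite-membrane form:

```latex
P_{\mathrm{eff}} \;=\; P_m \,
  \frac{P_d + 2P_m - 2\,\phi_d\,(P_m - P_d)}
       {P_d + 2P_m + \phi_d\,(P_m - P_d)}
```

Deviations from this relation arise when the dispersed domains are non-spherical, non-dilute, or arranged anisotropically, which is precisely the dependence on arrangement, shape and loading analysed in the work.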