25 results for Analysis of multiple regression
at Universidad Politécnica de Madrid
Abstract:
In Operational Modal Analysis (OMA) of a structure, the data acquisition process may be repeated many times. In these cases, the analyst has several similar records for the modal analysis of the structure, obtained at different time instants (multiple records). The solution obtained varies from one record to another, sometimes considerably. The differences are due to several reasons: statistical estimation errors, changes in the external (unmeasured) forces that modify the output spectra, the appearance of spurious modes, etc. Combining the results of the different individual analyses is not straightforward. To solve the problem, we propose to estimate the parameters jointly using all the records. This can be done in a very simple way using state space models and computing the estimates by maximum likelihood. The method provides a single result for the modal parameters that optimally combines all the records.
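The abstract does not spell out the estimation details, so the following is only a minimal sketch of the pooling idea: several records of the same structure are fitted jointly by minimizing the sum of their negative log-likelihoods under a shared single-mode model (a discrete AR(2) stand-in for one damped mode). The model, the data, and all names are hypothetical and do not reproduce the authors' state-space formulation.

```python
# Minimal sketch of the pooling idea, not the authors' method: several records of
# the same structure are fitted jointly by minimizing the sum of their negative
# log-likelihoods under a shared single-mode model (a discrete AR(2) proxy for one
# damped mode). Model, data, and all names are hypothetical.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
dt = 0.01                        # sampling interval [s]
f_true, zeta_true = 2.0, 0.02    # "true" modal frequency [Hz] and damping ratio

def ar2_coeffs(f, zeta, dt):
    """AR(2) coefficients whose poles correspond to a damped mode (f, zeta)."""
    wn = 2.0 * np.pi * f
    wd = wn * np.sqrt(1.0 - zeta**2)
    r = np.exp(-zeta * wn * dt)
    return 2.0 * r * np.cos(wd * dt), -r**2

# Several records measured at different time instants (different lengths, same
# assumed excitation level), driven by unmeasured white-noise forces.
a1, a2 = ar2_coeffs(f_true, zeta_true, dt)
records = []
for n in (4000, 6000, 5000):
    y, e = np.zeros(n), rng.normal(0.0, 1.0, n)
    for t in range(2, n):
        y[t] = a1 * y[t - 1] + a2 * y[t - 2] + e[t]
    records.append(y)

def pooled_neg_loglik(theta):
    """Sum of Gaussian conditional negative log-likelihoods over all records."""
    f, zeta, log_sigma = theta
    if f <= 0.0 or not (0.0 < zeta < 1.0):
        return np.inf
    a1, a2 = ar2_coeffs(f, zeta, dt)
    sigma2 = np.exp(2.0 * log_sigma)
    nll = 0.0
    for y in records:
        resid = y[2:] - a1 * y[1:-1] - a2 * y[:-2]
        nll += 0.5 * (resid.size * np.log(2.0 * np.pi * sigma2)
                      + np.sum(resid**2) / sigma2)
    return nll

res = minimize(pooled_neg_loglik, x0=[1.5, 0.05, 0.0], method="Nelder-Mead")
f_hat, zeta_hat, _ = res.x
print(f"joint estimate from all records: f = {f_hat:.3f} Hz, zeta = {zeta_hat:.4f}")
```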
Abstract:
Computing the modal parameters of large structures in Operational Modal Analysis often requires processing data from multiple, non-simultaneously recorded setups of sensors. These setups share some sensors in common, the so-called reference sensors, which are fixed for all the measurements, while the other sensors are moved from one setup to the next. One possibility is to process the setups separately, which results in different modal parameter estimates for each setup. The reference sensors are then used to merge or glue the different parts of the mode shapes to obtain global modes, while the natural frequencies and damping ratios are usually averaged. In this paper we present a state space model that can be used to process all setups at once, so that the global mode shapes are obtained automatically and a single value of the natural frequency and damping ratio is computed for each mode. We also present how this model can be estimated using maximum likelihood and the Expectation Maximization algorithm. We apply this technique to real data measured at a footbridge.
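The joint maximum-likelihood/EM estimation itself is too involved for a short example. As a point of reference, the sketch below illustrates the conventional merging step that the paper replaces: each setup's partial mode shape is rescaled by a least-squares factor computed on the shared reference sensors, and the natural frequencies are simply averaged. All numbers and names are hypothetical.

```python
# Sketch of the conventional "gluing" of partial mode shapes across setups using
# the shared reference sensors (the step the paper replaces by joint estimation).
# All data and names are hypothetical.
import numpy as np

# Each setup: identified natural frequency, partial mode shape at the reference
# sensors (same physical DOFs in every setup), and at its own roving sensors.
setups = [
    {"freq": 2.01, "phi_ref": np.array([1.00, 0.62]),
     "roving_dofs": ["n3", "n4"], "phi_rov": np.array([0.35, -0.41])},
    {"freq": 1.98, "phi_ref": np.array([0.96, 0.60]),
     "roving_dofs": ["n5", "n6"], "phi_rov": np.array([-0.80, 0.15])},
    {"freq": 2.03, "phi_ref": np.array([1.05, 0.66]),
     "roving_dofs": ["n7"],       "phi_rov": np.array([0.55])},
]

base = setups[0]
global_shape = {"ref1": base["phi_ref"][0], "ref2": base["phi_ref"][1]}
global_shape.update(dict(zip(base["roving_dofs"], base["phi_rov"])))

for s in setups[1:]:
    # Least-squares scale factor aligning this setup's reference readings with
    # the first setup: alpha = argmin || phi_ref_base - alpha * phi_ref_s ||.
    alpha = (s["phi_ref"] @ base["phi_ref"]) / (s["phi_ref"] @ s["phi_ref"])
    for dof, val in zip(s["roving_dofs"], s["phi_rov"]):
        global_shape[dof] = alpha * val

freq = np.mean([s["freq"] for s in setups])   # frequencies are simply averaged
print(f"merged natural frequency: {freq:.3f} Hz")
print("merged mode shape:", {k: round(float(v), 3) for k, v in global_shape.items()})
```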
Abstract:
In this paper, multiple regression analysis is used to model the top of descent (TOD) location of user-preferred descent trajectories computed by the flight management system (FMS) on over 1000 commercial flights into Melbourne, Australia. In addition to recording TOD, the cruise altitude, final altitude, cruise Mach, descent speed, wind, and engine type were also identified for use as the independent variables in the regression analysis. Both first-order and second-order models are considered, and cross-validation, hypothesis testing, and additional analysis are used to compare models. This identifies the models that should give the smallest errors if used to predict TOD location for new data in the future. A model that is linear in TOD altitude, final altitude, descent speed, and wind gives an estimated standard deviation of 3.9 nmi for TOD location given the trajectory parameters, which means about 80% of predictions would have an error of less than 5 nmi in absolute value. This accuracy is better than that demonstrated by other ground automation predictions using kinetic models. Furthermore, this approach would enable online learning of the model. Additional data or further knowledge of algorithms is necessary to conclude definitively that no second-order terms are appropriate. Possible applications of the linear model are described, including enabling arriving aircraft to fly optimized descents computed by the FMS even in congested airspace.
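As an illustration of the regression-and-cross-validation workflow described above, the following sketch fits a first-order linear model for TOD location and scores it with 10-fold cross-validation. The data are synthetic placeholders; the variable names and coefficients are hypothetical and are not taken from the study.

```python
# Illustrative sketch of a first-order regression model for TOD location with
# cross-validation. Synthetic placeholder data; names and coefficients are
# hypothetical, not the study's.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([
    rng.uniform(30_000, 39_000, n),   # cruise (TOD) altitude [ft]
    rng.uniform(4_000, 10_000, n),    # final altitude [ft]
    rng.uniform(250, 310, n),         # descent speed [kt]
    rng.normal(0, 30, n),             # along-track wind [kt]
])
# Hypothetical linear ground truth plus noise, standing in for recorded TOD
# locations [nmi].
beta = np.array([0.003, -0.002, 0.15, 0.08])
y = 20 + X @ beta + rng.normal(0, 3.9, n)

model = LinearRegression()
cv = KFold(n_splits=10, shuffle=True, random_state=0)
rmse = -cross_val_score(model, X, y, cv=cv,
                        scoring="neg_root_mean_squared_error")
print(f"10-fold CV RMSE: {rmse.mean():.2f} +/- {rmse.std():.2f} nmi")

model.fit(X, y)
print("fitted coefficients:", np.round(model.coef_, 4))
```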
Abstract:
We show that for a tether at 800 km altitude, which is 5 km long, 2 cm wide and 0.05 mm thick, the risk of substantial damage during a 3-month period due to multiple impacts with debris or micrometeoroids is low, about 1.4%. By substantial damage we mean that, if the tape is divided into 2 cm × 2 cm squares, then in some square the area damaged by bombardment with debris or micrometeoroids exceeds 11% of the area of the square. Furthermore, we show that the danger posed by micrometeoroids is negligible compared to the risk posed by debris.
Abstract:
The Critical Moment (CM) in basketball is a game-related phenomenon with particular features determined by the idiosyncrasies of a team; it can affect the players and, therefore, the course of the game. This Thesis studies the incidence of the CM in the Spanish A.C.B. basketball league through two complementary investigations, one quantitative and one qualitative, whose methodology is as follows. The quantitative research is based on the performance-analysis approach: four seasons of the A.C.B. League (2007/08 to 2010/11) were studied and, following the consulted literature, the critical moments of the game were taken to be the last five minutes of games in which the point difference was six points, together with all overtimes played, giving 197 critical moments in total. The study was contextualized by the situational variables "game location" (home or away), "team quality" (better or worse classified) and "competition" (regular season and Playoff phases). For the interpretation of the results, the following analyses were performed: 1) Discriminant Analysis, 2) Multiple Linear Regression, and 3) Multivariate General Linear Model analysis.
The qualitative research is based on semi-structured interviews. Twelve coaches active in the A.C.B. League during the 2011/12 season were interviewed, with the aim of capturing the coach's point of view on the CM concept and thus providing a more practical perspective, based on their knowledge and experience, on how to act when a CM arises in basketball. The results of both investigations agree on the importance of the CM for the final outcome of the game. At the same time, the concept itself is highly complex, so the scientific view provided by systematic observation of the game and the coach's subjective perception of the phenomenon are both considered essential, and the psychological aspects of the protagonists (players and coaches) are decisive.
Abstract:
Transportation infrastructure is known to affect the value of real estate property by virtue of changes in accessibility. The impact of transportation facilities is also highly localized, and it is possible that spillover effects result from the capitalization of accessibility. The objective of this study was to review the theoretical background related to spatial hedonic models and the opportunities they provide to evaluate the effect of new transportation infrastructure. An empirical case study is presented: Madrid Metro Line 12, known as Metrosur, in the region of Madrid, Spain. The effect of proximity to metro stations on housing prices was evaluated. The analysis took into account a host of variables, including structure, location, and neighborhood, and made use of three modeling approaches: linear regression estimation with ordinary least squares, spatial error, and spatial lag. The results indicated that better accessibility to Metrosur stations had a positive impact on real estate values and that the effect was marked in cases in which a house was for sale. The results also showed the presence of submarkets, which were well defined by geographic boundaries and transport fares; this implied that the economic benefits differed across municipalities.
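As a rough illustration of the three modeling approaches named above, the sketch below fits an OLS, a spatial lag, and a spatial error hedonic model to synthetic housing data, assuming the PySAL/spreg API and a k-nearest-neighbour spatial weights matrix. The variables, data, and weights choice are hypothetical placeholders, not those of the study.

```python
# Illustrative sketch of the three approaches mentioned in the abstract (OLS,
# spatial lag, spatial error), assuming the PySAL / spreg API and using purely
# synthetic data. Variable names are hypothetical placeholders.
import numpy as np
import libpysal
from spreg import OLS, ML_Lag, ML_Error

rng = np.random.default_rng(2)
n = 400
coords = rng.uniform(0, 10, size=(n, 2))           # dwelling locations
dist_metro = rng.uniform(0.1, 3.0, size=(n, 1))    # distance to metro station [km]
floor_area = rng.normal(90, 20, size=(n, 1))       # structural attribute [m2]
X = np.hstack([dist_metro, floor_area])
# Hypothetical hedonic price equation: accessibility capitalized into price.
y = 200 - 15 * dist_metro + 1.2 * floor_area + rng.normal(0, 10, size=(n, 1))

w = libpysal.weights.KNN.from_array(coords, k=8)   # spatial weights (8 neighbours)
w.transform = "r"                                  # row-standardize

names = ["dist_metro_km", "floor_area_m2"]
ols = OLS(y, X, name_x=names, name_y="price")
lag = ML_Lag(y, X, w=w, name_x=names, name_y="price")
err = ML_Error(y, X, w=w, name_x=names, name_y="price")

for label, m in [("OLS", ols), ("Spatial lag", lag), ("Spatial error", err)]:
    print(label, "betas:", np.round(m.betas.flatten(), 3))
```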
Abstract:
We study the multiple specialization of logic programs based on abstract interpretation. This involves in general generating several versions of a program predicate for different uses of such predicate, making use of information obtained from global analysis performed by an abstract interpreter, and finally producing a new, "multiply specialized" program. While the topic of multiple specialization of logic programs has received considerable theoretical attention, it has never been actually incorporated in a compiler and its effects quantified. We perform such a study in the context of a parallelizing compiler and show that it is indeed a relevant technique in practice. Also, we propose an implementation technique which has the same power as the strongest of the previously proposed techniques but requires little or no modification of an existing abstract interpreter.
Abstract:
Public-Private Partnerships (PPPs) are mostly implemented for three reasons: to circumvent budgetary constraints, to encourage efficiency, and to improve the quality of public infrastructure provision. One of the ways of reaching the latter objective is the introduction of performance-based standards tied to bonuses and penalties that reward or punish the performance of the contractor. These performance-based standards often refer to different aspects such as technical, environmental and safety issues. This paper focuses on the implementation of safety-based incentives in PPPs. Its main aim is to analyze whether the incentives to improve road safety in PPPs are effective in improving safety ratios in Spain. To this end, negative binomial regression models have been applied using information from the Spanish high-capacity network in 2006. The findings indicate that, even though road safety is highly influenced by variables over which the contractor has little control, such as the Average Annual Daily Traffic and the percentage of heavy vehicles on the highway, the implementation of safety incentives in PPPs has a positive influence on the reduction of fatalities, injuries and accidents.
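To make the modeling approach concrete, the following is a minimal sketch of a negative binomial count model relating accident counts to traffic volume, heavy-vehicle share, and a safety-incentive indicator, fitted with statsmodels on synthetic data. The variables, data-generating process, and coefficients are hypothetical placeholders, not the paper's dataset.

```python
# Illustrative sketch of a negative binomial accident-count model of the kind
# described in the abstract, fitted on synthetic data. Variable names and
# coefficients are hypothetical placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 300                                        # hypothetical road sections
aadt = rng.uniform(5_000, 80_000, n)           # Average Annual Daily Traffic
heavy_pct = rng.uniform(5, 25, n)              # % heavy vehicles
incentive = rng.integers(0, 2, n)              # 1 if safety incentive in contract

# Hypothetical data-generating process: counts grow with traffic exposure,
# incentives reduce the expected count; gamma mixing adds overdispersion.
mu = np.exp(-4 + 0.6 * np.log(aadt) + 0.01 * heavy_pct - 0.25 * incentive)
accidents = rng.poisson(mu * rng.gamma(shape=2.0, scale=0.5, size=n))

X = sm.add_constant(np.column_stack([np.log(aadt), heavy_pct, incentive]))
model = sm.GLM(accidents, X, family=sm.families.NegativeBinomial(alpha=0.5))
result = model.fit()
print(result.summary(xname=["const", "log_aadt", "heavy_pct", "incentive"]))
```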
Abstract:
This article aims to quantify the efficiency of mobile operators in Spain and other European countries such as France and Germany. The period considered is from 2002 to 2008. Linear regression is used to analyze the relationship between growth in revenue and gross operating margin (EBITDA) generated by the relevant operators and the aggregate industry in each country. At the industry level, it is shown that (i) there is a strong correlation between revenue and margin; and (ii) this correlation weakens when competitive intensity grows. At the operator level, those which achieved larger increases in revenues did not sacrifice their margins, but offset the additional investments and costs required to achieve said growth through economies of scale.
Abstract:
Service-Oriented Computing (SOC) is a widely accepted paradigm for the development of flexible, distributed and adaptable software systems, in which service compositions perform more complex, higher-level, and often cross-organizational tasks using atomic services or other service compositions. In such systems, Quality of Service (QoS) properties, such as performance, cost, availability or security, are critical for the usability of services and their compositions in concrete applications. The analysis of these properties becomes more precise and informative if it employs program analysis techniques, such as complexity and sharing analysis, which can simultaneously take into account the control and data structures, dependencies, and operations in a composition. Computational cost analysis of service compositions can support predictive monitoring and proactive adaptation by automatically inferring upper and lower bounds on computational cost as functions of the value or size of the input messages. These cost functions can be used for adaptation by selecting, among the candidate services, those that minimize the total cost of the composition, based on the actual data passed to them. The cost functions can also be combined with empirically collected infrastructure parameters to produce QoS bound functions of the input data, which can be used to predict, at invocation time, potential or imminent Service Level Agreement (SLA) violations. In mission-critical applications, effective and accurate continuous QoS prediction can be achieved by constraint modeling of the composition QoS based on its structure, empirical data available at runtime, and (when available) the results of complexity analysis. This approach can be applied to service orchestrations with centralized flow control as well as to choreographies with multiple participants engaged in complex stateful interactions. Sharing analysis can support adaptation actions such as parallelization, fragmentation, and component selection, which are based on the functional dependencies and information content of the composition's messages, internal data, and activities, in the presence of complex control constructs such as loops, branches, and sub-workflows. Both the functional dependencies and the information content (described through user-defined attributes) can be expressed using a first-order logic (Horn clause) representation, and the analysis results can be interpreted as lattice-based conceptual models.
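The adaptation-by-selection idea above can be illustrated with a small, purely hypothetical sketch: each candidate service exposes an upper-bound cost function of the input-message size, the composition picks the cheapest candidate for the actual input, and the chosen bound is checked against an SLA threshold before invocation. None of the services, cost functions or thresholds come from the paper.

```python
# Illustrative sketch of adaptation by candidate selection using cost bound
# functions of the input-message size. Services, cost functions, and the SLA
# threshold are hypothetical placeholders.

# Upper-bound cost functions (e.g., in milliseconds) inferred per candidate,
# expressed as functions of n = size of the input message.
candidates = {
    "svcA": lambda n: 5.0 + 0.20 * n,          # linear in message size
    "svcB": lambda n: 40.0 + 0.02 * n,         # higher setup cost, cheaper per item
    "svcC": lambda n: 1.0 + 0.0005 * n * n,    # quadratic, cheap only when small
}

def select_candidate(n_items, sla_ms=None):
    """Pick the candidate whose upper-bound cost is minimal for this input;
    optionally flag a potential SLA violation before invoking anything."""
    costs = {name: f(n_items) for name, f in candidates.items()}
    best = min(costs, key=costs.get)
    violation = sla_ms is not None and costs[best] > sla_ms
    return best, costs[best], violation

for n in (10, 500, 5000):
    name, bound, violated = select_candidate(n, sla_ms=300.0)
    flag = "  <-- predicted SLA violation" if violated else ""
    print(f"n={n:5d}: choose {name}, cost bound {bound:8.1f} ms{flag}")
```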
Abstract:
Atrial fibrillation (AF) is a common heart disorder. One of the most prominent hypotheses about its initiation and maintenance considers multiple uncoordinated activation foci inside the atrium. However, the implicit assumption behind all the signal processing techniques used for AF, such as dominant frequency and organization analysis, is the existence of a single regular component in the observed signals. In this paper we take into account the existence of multiple foci, performing a spectral analysis to detect their number and frequencies. In order to obtain a cleaner signal on which the spectral analysis can be performed, we introduce sparsity-aware learning techniques to infer the spike trains corresponding to the activations. The good performance of the proposed algorithm is demonstrated on both synthetic and real data. SUMMARY: An algorithm based on sparse regression techniques for extracting cardiac signals in patients with atrial fibrillation (AF).
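This is not the authors' algorithm, but the core idea of sparsity-aware spike-train recovery can be sketched as follows: model the observed signal as a known activation waveform convolved with a sparse spike train, recover the train with a non-negative Lasso regression, and then inspect its spectrum for the activation frequencies. The waveform, firing rates, and all names are hypothetical.

```python
# Minimal sketch of sparsity-aware recovery of an activation (spike) train: the
# observed signal is modeled as a known activation waveform convolved with a
# sparse spike train, recovered here with Lasso regression. Not the authors'
# algorithm; waveform, rates, and names are hypothetical.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
fs, T = 200, 4.0                      # sampling rate [Hz], duration [s]
n = int(fs * T)

# Sparse "true" spike train: two foci firing at different rates.
spikes = np.zeros(n)
spikes[np.arange(0, n, int(fs / 5.0))] = 1.0   # focus 1: 5 Hz
spikes[np.arange(7, n, int(fs / 7.5))] += 1.0  # focus 2: ~7.5 Hz

# Known activation waveform (a short biphasic pulse) and its convolution matrix.
wave = np.concatenate([np.hanning(10), -0.4 * np.hanning(14)])
D = np.zeros((n, n))
for j in range(n):
    seg = wave[: n - j]
    D[j : j + len(seg), j] = seg

y = D @ spikes + 0.05 * rng.normal(size=n)     # noisy observation

lasso = Lasso(alpha=0.01, positive=True, max_iter=5000)
lasso.fit(D, y)
est = lasso.coef_                              # estimated sparse spike train

# Naive peak-picking on the spectrum of the recovered train (harmonics included).
spectrum = np.abs(np.fft.rfft(est))
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
peaks = freqs[np.argsort(spectrum[1:])[-5:] + 1]   # skip the DC bin
print("dominant activation frequencies [Hz]:", np.round(np.sort(peaks), 2))
```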
Abstract:
Nitrous oxide emissions from a network of agricultural experiments in Europe were used to explore the relative importance of site and management controls of emissions. At each site, a selection of management interventions was compared within replicated experimental designs in plot-based experiments. Arable experiments were conducted at Beano in Italy, El Encin in Spain, Foulum in Denmark, Logarden in Sweden, Maulde in Belgium, Paulinenaue in Germany, and Tulloch in the UK. Grassland experiments were conducted at Crichton, Nafferton and Peaknaze in the UK, Godollo in Hungary, Rzecin in Poland, Zarnekow in Germany, and Theix in France. Nitrous oxide emissions were measured at each site over a period of at least two years using static chambers. Emissions varied widely between sites and as a result of manipulation treatments. Average site emissions (throughout the study period) varied between 0.04 and 21.21 kg N2O-N ha−1 yr−1, with the largest fluxes and variability associated with the grassland sites. Total nitrogen addition was found to be the single most important determinant of emissions, accounting for 15% of the variance (using linear regression) in the data from the arable sites (p<0.0001), and 77% in the grassland sites. The annual emissions from arable sites were significantly greater than those that would be predicted by IPCC default emission factors. Variability of N2O emissions within sites that occurred as a result of manipulation treatments was greater than that resulting from site-to-site and year-to-year variation, highlighting the importance of management interventions in contributing to greenhouse gas mitigation.
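The variance-explained figures above come from single-predictor linear regressions of annual emissions on total nitrogen addition. A minimal sketch of that calculation, on synthetic placeholder data, is shown below; the slope, scatter, and sample size are hypothetical.

```python
# Illustrative sketch of a single-predictor linear regression quantifying how
# much of the variance in annual N2O emissions is explained by total nitrogen
# addition. Synthetic placeholder data; numbers are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_plots = 60
n_addition = rng.uniform(0, 300, n_plots)          # kg N ha-1 yr-1 applied
# Hypothetical emission response: weak dependence plus large unexplained scatter,
# loosely mimicking arable sites where N addition explained a modest share of
# the variance.
n2o = 0.5 + 0.004 * n_addition + rng.normal(0, 0.45, n_plots)  # kg N2O-N ha-1 yr-1

res = stats.linregress(n_addition, n2o)
print(f"slope = {res.slope:.4f} kg N2O-N per kg N added")
print(f"R^2   = {res.rvalue**2:.2f}  (share of variance explained)")
print(f"p     = {res.pvalue:.2g}")
```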
Abstract:
A comprehensive assessment of the liquidity development in the Iberian power futures market managed by OMIP (“Operador do Mercado Ibérico de Energia, Pólo Português”) in its first 4 years of existence is performed. This market started in July 2006. A regression model tracking the evolution of the traded volumes in the continuous market is built as a function of 12 potential liquidity drivers. The only significant drivers are the traded volumes in OMIP compulsory auctions, the traded volumes in the “Over The Counter” (OTC) market, and the OTC volumes cleared in OMIP's clearing house (OMIClear). Furthermore, the enrollment of financial members shows strong correlation with the traded volumes in the continuous market. OMIP liquidity is still far from the levels reached by the most mature European markets (Nord Pool and EEX). The market operator and its clearing house could develop efficient marketing actions to attract new entrants active in the spot market (energy-intensive industries, suppliers, and small producers) as well as volumes from the opaque OTC market, and to improve the performance of existing illiquid products. An active dialogue with all the stakeholders (market participants, spot market operator, and supervisory authorities) will help to implement such actions.
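As a rough illustration of this kind of drivers regression, the sketch below regresses monthly continuous-market volumes on a few candidate liquidity drivers with statsmodels OLS and inspects which coefficients are significant. The driver names and synthetic data are hypothetical placeholders; they do not reproduce the 12 drivers used in the paper.

```python
# Illustrative sketch of regressing continuous-market traded volumes on several
# candidate liquidity drivers and checking which coefficients are significant.
# Data and driver names are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(6)
n_months = 48                                          # four years of monthly data
drivers = pd.DataFrame({
    "auction_volume": rng.uniform(50, 400, n_months),  # OMIP compulsory auctions
    "otc_volume":     rng.uniform(200, 2000, n_months),# OTC market
    "otc_cleared":    rng.uniform(20, 600, n_months),  # OTC cleared in OMIClear
    "spot_price":     rng.uniform(30, 80, n_months),   # an irrelevant driver
})
# Hypothetical relationship: only the first three drivers actually matter.
continuous_volume = (0.4 * drivers["auction_volume"]
                     + 0.15 * drivers["otc_volume"]
                     + 0.3 * drivers["otc_cleared"]
                     + rng.normal(0, 40, n_months))

X = sm.add_constant(drivers)
fit = sm.OLS(continuous_volume, X).fit()
print(fit.summary())          # inspect t-statistics / p-values for each driver
```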
Abstract:
This paper analyses the relationship between productive efficiency and online social networks (OSN) in Spanish telecommunications firms. A data envelopment analysis (DEA) is used and several indicators of business "social media" activities are incorporated. A super-efficiency analysis and bootstrapping techniques are performed to increase the model's robustness and accuracy. Then, a logistic regression model is applied to characterise factors and drivers of good performance in OSN. Results reveal the company's ability to absorb and utilise OSNs as a key factor in improving productive efficiency. This paper presents a model for assessing the strategic performance of the presence and activity in OSN.
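To make the two-stage approach concrete, the following sketch computes input-oriented CCR DEA efficiency scores by solving one linear program per firm with scipy, then fits a logistic regression of an "efficient" indicator on a social-network activity index. The firms, inputs, outputs, cut-off, and activity index are synthetic and hypothetical; the paper's actual DEA specification, super-efficiency analysis and bootstrapping steps are not reproduced.

```python
# Illustrative sketch: (1) input-oriented CCR DEA efficiency per firm solved as a
# linear program, and (2) a logistic regression relating an "efficient" indicator
# to an online-social-network activity index. Synthetic data; not the paper's
# exact model or variables.
import numpy as np
from scipy.optimize import linprog
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 30                                      # hypothetical telecom firms (DMUs)
inputs = rng.uniform(1, 10, size=(n, 2))    # e.g. labour, capital
osn_activity = rng.uniform(0, 1, n)         # social-media activity index
# Output grows with inputs and (hypothetically) with OSN activity.
outputs = (inputs @ np.array([1.0, 0.8]) * (0.8 + 0.4 * osn_activity)
           + rng.normal(0, 0.3, n)).reshape(n, 1)

def dea_ccr_input(X, Y, o):
    """Input-oriented CCR efficiency of DMU o. X: (n, m) inputs, Y: (n, s) outputs."""
    n_dmu, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n_dmu)
    c[0] = 1.0                                    # minimize theta
    A_ub, b_ub = [], []
    for i in range(m):                            # sum_j lam_j * x_ij <= theta * x_io
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(s):                            # sum_j lam_j * y_rj >= y_ro
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[o, r])
    bounds = [(0, None)] * (1 + n_dmu)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=bounds, method="highs")
    return res.x[0]

scores = np.array([dea_ccr_input(inputs, outputs, o) for o in range(n)])
efficient = (scores > np.median(scores)).astype(int)   # median split for the sketch

logit = LogisticRegression().fit(osn_activity.reshape(-1, 1), efficient)
print("DEA scores (first 5):", np.round(scores[:5], 3))
print("logistic coefficient for OSN activity:", np.round(logit.coef_[0, 0], 3))
```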