951 results for "Building demand estimation model"


Relevance: 100.00%

Abstract:

In recent decades, full electric and hybrid electric vehicles have emerged as an alternative to conventional cars due to a range of factors, including environmental and economic aspects. These vehicles are the result of considerable efforts to seek ways of reducing the use of fossil fuels for vehicle propulsion. Sophisticated technologies such as hybrid and electric powertrains require careful study and optimization, and mathematical models play a key role at this point. Currently, many advanced mathematical analysis tools, as well as computer applications, have been built for vehicle simulation purposes. Given the great interest in hybrid and electric powertrains, along with the increasing importance of reliable computer-based models, the author decided to integrate both aspects in the research purpose of this work. Furthermore, this is one of the first final degree projects carried out at the ETSII (Higher Technical School of Industrial Engineers) that covers the study of hybrid and electric propulsion systems. The present project is based on MBS3D 2.0, a specialized software package for the dynamic simulation of multibody systems developed at the UPM Institute of Automobile Research (INSIA). Automobiles are a clear example of complex multibody systems, which are present in nearly every field of engineering. The work presented here benefits from the availability of the MBS3D software, a program that has proven to be a very efficient tool with a highly developed underlying mathematical formulation. On this basis, the focus of this project is the extension of MBS3D so that it can perform dynamic simulations of hybrid and electric vehicle models. This requires the joint simulation of the mechanical model of the vehicle together with the model of the hybrid or electric powertrain. These sub-models belong to completely different physical domains: the powertrain consists of energy storage systems, electrical machines and power electronics, connected to purely mechanical components (wheels, suspension, transmission, clutch…). The challenge today is to create a global vehicle model that is valid for computer simulation. Therefore, the main goal of this project is to apply co-simulation methodologies to a comprehensive model of an electric vehicle, where sub-models from different areas of engineering are coupled.

The created electric vehicle (EV) model consists of a separately excited DC electric motor, a Li-ion battery pack, a DC/DC chopper converter and a multibody vehicle model. Co-simulation techniques allow car designers to simulate complex vehicle architectures and behaviors that are usually difficult to test in a real environment for safety and/or economic reasons. In addition, multi-domain computational models help to detect the effects of different driving patterns and parameters and to improve the models in a fast and effective way. Automotive designers can greatly benefit from a multidisciplinary approach to new hybrid and electric vehicles. In this case, the global electric vehicle model includes an electrical subsystem and a mechanical subsystem. The electrical subsystem consists of three basic components: electric motor, battery pack and power converter. A modular representation is used for building the dynamic model of the vehicle drivetrain. This means that every component of the drivetrain (submodule) is modeled separately and has its own general dynamic model, with clearly defined inputs and outputs.
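The modular input/output structure described above can be pictured with a short sketch. The component classes, signal names and parameter values below are illustrative assumptions made for this sketch only; they are not part of MBS3D or the thesis.

```python
# Illustrative sketch of a modular drivetrain with clearly defined inputs and
# outputs per submodule; hypothetical interfaces, not the MBS3D implementation.

class Battery:
    def __init__(self, open_circuit_voltage=340.0, internal_resistance=0.08):
        self.voc = open_circuit_voltage
        self.r0 = internal_resistance

    def terminal_voltage(self, current):
        # Output: pack voltage for a given discharge current (simple resistive model).
        return self.voc - self.r0 * current


class Chopper:
    def output_voltage(self, input_voltage, duty_cycle):
        # Output: averaged DC/DC converter voltage for a duty cycle in [0, 1].
        return duty_cycle * input_voltage


class DCMotor:
    def __init__(self, torque_constant=1.2, resistance=0.25):
        self.k = torque_constant
        self.r = resistance

    def torque(self, voltage, shaft_speed):
        # Armature current from the steady-state voltage equation, then torque.
        current = (voltage - self.k * shaft_speed) / self.r
        return self.k * current


class ReductionGear:
    def __init__(self, ratio=7.0, efficiency=0.95):
        self.ratio = ratio
        self.eta = efficiency

    def wheel_torque(self, motor_torque):
        return self.ratio * self.eta * motor_torque


# Assembling the submodules according to the drivetrain configuration fixes
# the power flow: battery -> chopper -> motor -> reduction gear -> wheels.
battery, chopper, motor, gear = Battery(), Chopper(), DCMotor(), ReductionGear()
pack_voltage = battery.terminal_voltage(current=50.0)
motor_voltage = chopper.output_voltage(pack_voltage, duty_cycle=0.6)
t_wheel = gear.wheel_torque(motor.torque(motor_voltage, shaft_speed=120.0))
print(f"wheel torque ≈ {t_wheel:.1f} N·m")
```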
Then, all the particular submodules are assembled according to the drivetrain configuration and, in this way, the power flow across the components is completely determined. Dynamic models of electrical components are often based on equivalent circuits, where Kirchhoff's voltage and current laws are applied to derive the algebraic and differential equations. Here, a Randles circuit is used for the dynamic modeling of the battery, and the electric motor is modeled through the analysis of the equivalent circuit of a separately excited DC motor, in which the power converter is included. The mechanical subsystem is defined by the MBS3D equations, which consider the position, velocity and acceleration of all the bodies comprising the vehicle multibody system. MBS3D 2.0 is entirely written in MATLAB, and the structure of the program has been thoroughly studied and understood by the author. The MBS3D software is adapted according to the requirements of the applied co-simulation method: some of the core functions, such as the integrator and the graphics routines, are modified, and several auxiliary functions are added in order to compute the mathematical model of the electrical components. By coupling and co-simulating both subsystems, it is possible to evaluate the dynamic interaction among all the components of the drivetrain. A 'tight-coupling' method is used to co-simulate the sub-models. This approach integrates all subsystems simultaneously, and the results of the integration are exchanged by function call. This means that the integration is done jointly for the mechanical and the electrical subsystems under a single integrator, so the speed of integration is determined by the slower subsystem. Simulations are then used to show the performance of the developed EV model. However, this project focuses more on the validation of the computational and mathematical tool for electric and hybrid vehicle simulation. For this purpose, a detailed study and comparison of different integrators within the MATLAB environment is carried out. Consequently, the main efforts are directed towards the implementation of co-simulation techniques in the MBS3D software. In this regard, it is not intended to create an extremely precise EV model in terms of real vehicle performance, although an acceptable level of accuracy is achieved. The gap between the EV model and the real system is filled, in a way, by introducing the gas and brake pedal inputs, which reflect actual driver behavior. This input is included directly in the differential equations of the model and determines the amount of current provided to the electric motor. For a separately excited DC motor, the traction torque delivered to the car wheels is proportional to the rotor current. Therefore, as in real vehicles, the propulsion torque in the mathematical model is controlled through acceleration and brake pedal commands. The designed transmission system also includes a reduction gear that adapts and transfers the torque coming from the motor drive. The main contribution of this project is, therefore, the implementation of a new calculation path for the wheel torques, based on the performance characteristics and outputs of the electric powertrain model. Originally, the wheel traction and braking torques were input to MBS3D through a vector directly computed by the user in a MATLAB script. Now, they are calculated as a function of the motor current, which in turn depends on the current provided by the battery pack through the DC/DC chopper converter.
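For reference, the two equivalent circuits named above are usually written in roughly the following form. The symbols and the first-order Randles approximation are generic textbook choices, not necessarily the exact formulation used in the thesis.

```latex
% First-order Randles battery model: open-circuit voltage V_oc, ohmic
% resistance R_0 and one RC branch (R_1, C_1) for the slower dynamics.
\begin{align}
  C_1 \frac{dv_1}{dt} &= i_b - \frac{v_1}{R_1}, &
  v_t &= V_{oc} - R_0\, i_b - v_1 \\
% Separately excited DC motor: armature and field circuits plus the torque.
  L_a \frac{di_a}{dt} &= v_a - R_a i_a - k\,\phi\,\omega, &
  L_f \frac{di_f}{dt} &= v_f - R_f i_f \\
  T_m &= k\,\phi\, i_a, &
  \phi &= \phi(i_f)
\end{align}
```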
The motor and battery currents and voltages are the solutions of the electrical ODE (Ordinary Differential Equation) system coupled to the multibody system. Simultaneously, the outputs of the MBS3D model are the position, velocity and acceleration of the vehicle at all times. The motor shaft speed is computed from the output vehicle speed considering the wheel radius, the gear reduction ratio and the transmission efficiency. This motor shaft speed, obtained from the MBS3D outputs, is then introduced into the differential equations corresponding to the electrical subsystem. In this way, MBS3D and the electrical powertrain model are interconnected and both subsystems exchange values, as expected with the tight-coupling approach. When programming mathematical models of complex systems, code optimization is a key step in the process. A way to improve the overall performance of the integration, making use of C/C++ as an alternative programming language, is described and implemented. Although this entails additional programming effort, it leads to important advantages regarding co-simulation speed and stability. In order to do this, it is necessary to integrate MATLAB with another integrated development environment (IDE) where C/C++ code can be generated and executed. In this project, the C/C++ files are programmed in Microsoft Visual Studio, and the interface between both IDEs is created by building C/C++ MEX file functions. These programs contain functions or subroutines that can be dynamically linked and executed from MATLAB. This process achieves reductions in simulation time of up to two orders of magnitude. The tests performed with different integrators also reveal the stiff character of the differential equations corresponding to the electrical subsystem and allow the co-simulation process to be improved. When varying the parameters of the integration and/or the initial conditions of the problem, the solutions of the system of equations show better dynamic response and stability depending on the integrator used. Several integrators, with variable and fixed step sizes and for stiff and non-stiff problems, are applied to the coupled ODE system, and the results are analyzed, compared and discussed. From all the above, the project can be divided into four main parts: 1. creation of the equation-based electric vehicle model; 2. programming, simulation and adjustment of the electric vehicle model; 3. application of co-simulation methodologies to MBS3D and the electric powertrain subsystem; and 4. code optimization and study of different integrators. Additionally, in order to place the project in context, the first chapters include an introduction to basic vehicle dynamics, the current classification of hybrid and electric vehicles, and an explanation of the technologies involved, such as brake energy regeneration, electric and non-electric propulsion systems for EVs and HEVs (hybrid electric vehicles) and their control strategies. Later, the problem of dynamic modeling of hybrid and electric vehicles is discussed. The integrated development environment and the simulation tool are also briefly described. The core chapters include an explanation of the major co-simulation methodologies and how they have been programmed and applied to the electric powertrain model together with the multibody system dynamic model. Finally, the last chapters summarize the main results and conclusions of the project and propose further research topics.
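The stiffness issue mentioned above can be illustrated with a toy coupled electro-mechanical ODE that has widely separated time constants. The use of SciPy solvers and all parameter values are assumptions made purely for illustration; the thesis works with MATLAB integrators and the full MBS3D model.

```python
# Toy comparison of a non-stiff vs. a stiff solver on a coupled
# electrical/mechanical ODE. Values are illustrative only.
import time
import numpy as np
from scipy.integrate import solve_ivp

R, L, K = 0.25, 1e-3, 1.2        # fast armature circuit (small L -> stiffness)
J, B = 5.0, 0.05                 # slow mechanical inertia
V = 200.0                        # constant supply voltage

def rhs(t, y):
    i, w = y                     # armature current, shaft speed
    di = (V - R * i - K * w) / L
    dw = (K * i - B * w) / J
    return [di, dw]

for method in ("RK45", "BDF"):   # explicit vs. implicit (stiff) solver
    t0 = time.perf_counter()
    sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 0.0], method=method, rtol=1e-6)
    print(f"{method}: {sol.t.size} steps, {time.perf_counter() - t0:.2f} s")
```

The explicit solver is forced to take steps on the scale of the fast electrical time constant over the whole horizon, while the implicit solver can lengthen its step once the fast transient has decayed; this is the behaviour the integrator study in the thesis investigates.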
In conclusion, co-simulation methodologies are applicable within the integrated development environments MATLAB and Visual Studio, and the simulation tool MBS3D 2.0, where equation-based models of multidisciplinary subsystems, consisting of mechanical and electrical components, are coupled and integrated in a very efficient way.

Relevance: 100.00%

Abstract:

This project is part of a line of work whose final goal is to optimize the energy consumed by a handheld multimedia device through the application of feedback control techniques, based on dynamic modification of the processor's operating frequency and supply voltage. The frequency and voltage modification is driven by feedback information about the power consumed by the device, which poses a problem since it is usually not possible to monitor power consumption on devices of this kind. For this reason, power consumption is estimated instead, using a prediction model. From the number of times certain events occur in the device's processor, the prediction model is able to obtain an estimate of the power consumed by the device. The work carried out in this project focuses on the implementation of a power estimation model in the Linux kernel. The estimation is implemented in the operating system, first of all, to gain direct access to the processor counters. Secondly, it facilitates the frequency and voltage modification once the power estimate is available, since this modification is also performed from the operating system. Another reason for implementing the estimation in the operating system is that it must be independent of user applications. Moreover, the estimation process runs periodically, which would be difficult to achieve outside the operating system. Periodic estimation is essential because the intended frequency and voltage modification is dynamic, so the device's power consumption must be known at all times. It should also be noted that the control algorithms have to be designed around a periodic actuation pattern. The power estimation model is specific to the consumption profile generated by a single given application, in this case a video decoder. Nevertheless, it needs to work as accurately as possible for each of the processor's operating frequencies and for as many video sequences as possible. This is because the successive power estimates are intended to drive the dynamic frequency modification, so the model must be able to keep producing estimates regardless of the frequency at which the device is operating. To assess the accuracy of the estimation model, measurements of the power consumed by the device are taken at the different operating frequencies while the video decoder is running. These measurements are compared with the power estimates obtained during those same runs, yielding the prediction error incurred by the model and allowing the appropriate modifications and adjustments to be made to it.
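A minimal user-space sketch of the counter-based prediction idea follows: fit a linear model relating event counts per sampling period to measured power, then use it to estimate power from new readings. The event names, the synthetic samples and the resulting coefficients are hypothetical; the thesis implements the model inside the Linux kernel with real processor counters.

```python
# Counter-based power model: P ≈ w0 + Σ w_i * event_count_i per period.
# All data below are synthetic and only illustrate the fitting/prediction step.
import numpy as np

EVENTS = ["cycles", "instructions", "cache_misses", "mem_accesses"]

# Rows = sampling periods; columns = event counts per period.
counters = np.array([
    [1.2e9, 0.9e9, 2.0e6, 5.0e7],
    [0.8e9, 0.7e9, 1.1e6, 3.2e7],
    [1.5e9, 1.2e9, 3.5e6, 6.8e7],
    [0.5e9, 0.4e9, 0.6e6, 1.9e7],
    [1.0e9, 0.8e9, 1.8e6, 4.4e7],
])
measured_power = np.array([1.35, 0.98, 1.62, 0.71, 1.20])   # watts, measured

# Least-squares fit with an intercept term (static power).
X = np.hstack([np.ones((counters.shape[0], 1)), counters])
weights, *_ = np.linalg.lstsq(X, measured_power, rcond=None)

def estimate_power(sample):
    """Predict power for one period of counter readings."""
    return float(weights[0] + np.dot(weights[1:], sample))

new_sample = np.array([1.1e9, 0.85e9, 2.2e6, 4.9e7])
print(f"estimated power ≈ {estimate_power(new_sample):.2f} W")
```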

Relevance: 100.00%

Abstract:

This dissertation investigates China's recent shift in its climate change policy with a refined discourse approach. Methodologically, by adopting a neo-Gramscian notion of hegemony, a generative definition of discourse and an ontologically pluralist position, the study constructs a theoretical framework named "discursive hegemony" that identifies the social forces enabling social change and focuses on the role of the discursive mechanisms through which those forces operate and produce effects. The key empirical finding of the study is that a co-evolution of conditions shaped China's climate policy shift. In examining the case, a before-after within-case comparison was designed to analyze the variations in material, institutional and ideational conditions, with methods including interviews, conventional narrative/text analysis and descriptive statistics. Specifically, changes in energy use, in the structure of the decision-making body and in the narratives about sustainable development reflected how these three types of social force played out in China in the first few years of the 21st century, causing the economic development agenda to absorb the climate issue and turning the policy frame for the latter from mainly a diplomatic matter into a potential opportunity for better-quality growth. Through the discursive operation of "science-based development", China's energy policy has been a good example of the Chinese understanding of sustainability, characterized by economic primacy, ecological viability and social green-engineering. This mode of discursive evolution, however, is a double-edged sword: it has pushed forward some fast, top-down mitigation measures, but it has also created, and will likely continue to create, social and ecological havoc. The study makes two major contributions. First, on the empirical level, because China is an international actor that was not expected to cooperate on the climate issue according to major IR theories, this study adds a critical case to the studies of global (environmental) governance and to the ideational approach in the IR discipline. Second, on the theory-building level, the model of discursive hegemony can offer a causally deeper mode of explanation because it traces the process of co-evolution of social forces.

Relevance: 100.00%

Abstract:

The potential for the use of DEA and simulation in a mutually supporting role in guiding operating units to improved performance is presented. An analysis following a three-stage process is suggested. Stage one involves obtaining the data for the DEA analysis; this can be sourced from historical data, simulated data or a combination of the two. Stage two involves the DEA analysis that identifies benchmark operating units. In the third stage, simulation can be used to offer practical guidance to operating units towards improved performance. This can be achieved through sensitivity analysis of the benchmark unit, using a simulation model to offer direct support as to the feasibility and efficiency of any variations in operating practices to be tested. Alternatively, the simulation can be used as a mechanism to transmit the practices of the benchmark unit to weaker-performing units, by building a simulation model of the weaker unit to the process design of the benchmark unit. The model can then compare the performance of the current and benchmark process designs. Quantifying improvement in this way provides a useful driver to any process change initiative that is required to bring the performance of weaker units up to the best in class. © 2005 Operational Research Society Ltd. All rights reserved.
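The benchmarking step in stage two is typically solved as one linear program per operating unit. The sketch below uses a common input-oriented CCR formulation with made-up unit data; both the choice of formulation and the numbers are assumptions for illustration, not taken from the paper.

```python
# Stage two sketch: input-oriented CCR DEA, one LP per unit.
# A score of 1.0 identifies a benchmark unit.
import numpy as np
from scipy.optimize import linprog

# inputs: rows = input types (e.g. staff hours, floor space); cols = units
X = np.array([[20.0, 30.0, 40.0, 25.0],
              [ 5.0,  8.0,  6.0,  9.0]])
# outputs: rows = output types (e.g. orders processed); cols = units
Y = np.array([[100.0, 120.0, 150.0, 110.0]])

n_units = X.shape[1]

def ccr_efficiency(o):
    """Efficiency score of unit o under the CCR envelopment model."""
    c = np.r_[1.0, np.zeros(n_units)]                 # minimise theta
    # inputs:  X @ lam <= theta * X[:, o]
    a_in = np.hstack([-X[:, [o]], X])
    # outputs: Y @ lam >= Y[:, o]  ->  -Y @ lam <= -Y[:, o]
    a_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])
    res = linprog(c,
                  A_ub=np.vstack([a_in, a_out]),
                  b_ub=np.r_[np.zeros(X.shape[0]), -Y[:, o]],
                  bounds=[(None, None)] + [(0, None)] * n_units,
                  method="highs")
    return res.x[0]

for o in range(n_units):
    print(f"unit {o}: efficiency = {ccr_efficiency(o):.3f}")
```

Units scoring 1.0 would be the benchmark candidates handed to the simulation stage, where variations in their operating practices can then be tested for feasibility.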

Relevance: 100.00%

Abstract:

Retrospective clinical data presents many challenges for data mining and machine learning. The transcription of patient records from paper charts and the subsequent manipulation of data often result in high volumes of noise as well as a loss of other important information. In addition, such datasets often fail to represent expert medical knowledge and reasoning in any explicit manner. In this research we describe the application of data mining methods to retrospective clinical data to build a prediction model for asthma exacerbation severity for pediatric patients in the emergency department. Difficulties in building such a model forced us to investigate alternative strategies for analyzing and processing retrospective data. This paper describes this process together with an approach to mining retrospective clinical data by incorporating formalized external expert knowledge (secondary knowledge sources) into the classification task. This knowledge is used to partition the data into a number of coherent sets, where each set is explicitly described in terms of the secondary knowledge source. Instances from each set are then classified in a manner appropriate for the characteristics of the particular set. We present our methodology and outline a set of experimental results that demonstrate some advantages and some limitations of our approach. © 2008 Springer-Verlag Berlin Heidelberg.
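A minimal sketch of the partition-then-classify idea follows. The features, the rule standing in for the formalized secondary knowledge source, and the synthetic data are all hypothetical; they only illustrate the mechanics, not the paper's actual model or clinical scoring.

```python
# Sketch: an external, rule-based severity band (stand-in for the secondary
# knowledge source) splits the records into coherent sets, and a separate
# classifier is trained per set. Data and rules are synthetic illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300
# columns: respiratory rate, oxygen saturation, accessory muscle use (0/1)
X = np.column_stack([
    rng.normal(30, 8, n),
    rng.normal(94, 4, n),
    rng.integers(0, 2, n),
])
y = rng.integers(0, 2, n)        # admitted vs. discharged (synthetic labels)

def severity_band(row):
    """Toy stand-in for a formalized expert score."""
    score = (row[0] > 35) + (row[1] < 92) + (row[2] == 1)
    return "mild" if score == 0 else ("moderate" if score == 1 else "severe")

bands = np.array([severity_band(r) for r in X])

models = {}
for band in np.unique(bands):
    mask = bands == band
    models[band] = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])
    print(f"{band}: {mask.sum()} records, "
          f"training accuracy {models[band].score(X[mask], y[mask]):.2f}")

# Classify a new record with the model belonging to its band.
new_patient = np.array([38.0, 90.0, 1.0])
band = severity_band(new_patient)
print(band, models[band].predict(new_patient.reshape(1, -1))[0])
```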

Relevance: 100.00%

Abstract:

This thesis will report details of two studies conducted within the National Health Service (NHS) in the UK that examined the association between HRM practices related to training and appraisal and health outcomes within NHS Trusts. Study one is an organisational analysis of 61 NHS Trusts and will report that training and appraisal practices were significantly associated with lower patient mortality. Specifically, the research will show significantly lower patient mortality within NHS Trusts that: a) had achieved Investors in People accreditation; b) had a formal strategy document relating to training; c) had tailored training policy documents across occupational groups; d) had integrated training and appraisal practices; e) had a high percentage of staff receiving either an appraisal or an updated personal development plan. There was also evidence of an additive effect, whereby NHS Trusts that displayed more of these characteristics had significantly lower patient mortality. Study one will also report significantly lower patient mortality within NHS Trusts where there was board-level representation for the HR function. Study two was conducted to examine the potential reasons why HR practices may be related to hospital performance, and details are given of the results of a staff attitudinal survey within five NHS Trusts. This study will show that a range of developmental activity, the favourability of the immediate work environment (in relation to social support and role stressors) and motivational outcomes are important antecedents of citizenship behaviours. Furthermore, the thesis will report that principles of the demand-control model were adopted to examine the relationship between workplace support and role stressors, and that workplace support, influence and an understanding of role expectations help mitigate the negative effects of work demands upon motivational outcomes.

Relevance: 100.00%

Abstract:

Theoretical and empirical studies show that deindustrialisation, broadly observed in developed countries, is an inherent part of the economic development pattern. However, post-communist countries, while being only middle-income economies, have also experienced deindustrialisation. Building on the model developed by Rowthorn and Wells (1987), we explain this phenomenon and show that there is a strong negative relationship between the magnitude of deindustrialisation and the efficiency and consistency of market reforms. We also demonstrate that reforms of the agricultural sector play a significant role in placing a transition country on a development path that guarantees convergence to EU employment structures.

Relevance: 100.00%

Abstract:

Computational Fluid Dynamics (CFD) has found great acceptance among the engineering community as a tool for the research and design of processes that are practically difficult or expensive to study experimentally. One of these processes is biomass gasification in a Circulating Fluidized Bed (CFB). Biomass gasification is the thermo-chemical conversion of biomass at a high temperature and a controlled oxygen amount into fuel gas, also sometimes referred to as syngas. A circulating fluidized bed is a type of reactor in which it is possible to maintain a stable and continuous circulation of solids in a gas-solid system. The main objectives of this thesis are fourfold: (i) to develop a three-dimensional predictive model of biomass gasification in a CFB riser using advanced CFD techniques; (ii) to experimentally validate the developed hydrodynamic model using conventional and advanced measuring techniques; (iii) to study the complex hydrodynamics, heat transfer and reaction kinetics through modelling and simulation; and (iv) to study the CFB gasifier performance through parametric analysis and identify the optimum operating conditions to maximize the product gas quality. Two different and complementary experimental techniques were used to validate the hydrodynamic model, namely pressure measurement and particle tracking. Pressure measurement is a very common and widely used technique in fluidized bed studies, while particle tracking using PEPT (positron emission particle tracking), which was originally developed for medical imaging, is a relatively new technique in the engineering field; it is relatively expensive and only available at a few research centres around the world. This study started with a simple poly-dispersed single solid phase and then moved to binary solid phases. The single solid phase was used for the primary validations and for eliminating unnecessary options and steps in building the hydrodynamic model. The outcomes from the primary validations were then applied to the secondary validations of the binary mixture to avoid time-consuming computations. Studies on binary solid mixture hydrodynamics are rarely reported in the literature. In this study the binary solid mixture was modelled and validated using experimental data from both techniques mentioned above, and good agreement was achieved with both. Following the general gasification steps, the developed model has been separated into three main gasification stages: drying; devolatilization and tar cracking; and partial combustion and gasification. The drying was modelled as a mass transfer from the solid phase to the gas phase. The devolatilization and tar cracking model consists of two steps: the devolatilization of the biomass, treated as a single reaction that generates the biomass gases from the volatile materials, and tar cracking, which is also modelled as one reaction that generates gases with fixed mass fractions. The first reaction is classified as a heterogeneous reaction, while the second is classified as a homogeneous reaction. The partial combustion and gasification model consists of carbon combustion reactions and carbon and gas phase reactions. The partial combustion considered is for C, CO, H2 and CH4. The carbon gasification reactions used in this study are the Boudouard reaction with CO2, the steam gasification reaction with H2O, and the methanation (methane-forming) reaction, which generates methane.
The other gas phase reactions considered in this study are the water gas shift reaction, which is modelled as a reversible reaction, and the methane steam reforming reaction. The developed gasification model was validated using different experimental data from the literature and for a wide range of operating conditions. Good agreement was observed, confirming the capability of the model to predict biomass gasification in a CFB with good accuracy. The developed model has been successfully used to carry out sensitivity and parametric analyses. The sensitivity analysis included the effect of including the various combustion reactions and the effect of radiation on the gasification reactions. The model was also used to carry out a parametric analysis by changing the following gasifier operating conditions: fuel/air ratio; biomass flow rates; sand (heat carrier) temperatures; sand flow rates; sand and biomass particle sizes; gasifying agent (pure air or pure steam); pyrolysis models used; and steam/biomass ratio. Finally, based on these parametric and sensitivity analyses, a final model was recommended for the simulation of biomass gasification in a CFB riser.
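For reference, the reactions named in this abstract are conventionally written with the following stoichiometry; this is a textbook summary, not necessarily the exact reaction set or kinetics implemented in the thesis.

```latex
% Heterogeneous (char) reactions
\begin{align}
  &\text{Boudouard:}          & \mathrm{C + CO_2}    &\rightleftharpoons \mathrm{2\,CO} \\
  &\text{Steam gasification:} & \mathrm{C + H_2O}    &\rightleftharpoons \mathrm{CO + H_2} \\
  &\text{Methanation:}        & \mathrm{C + 2\,H_2}  &\rightleftharpoons \mathrm{CH_4} \\
  &\text{Partial combustion:} & \mathrm{C + \tfrac{1}{2}\,O_2} &\rightarrow \mathrm{CO} \\
% Homogeneous (gas-phase) reactions
  &\text{CO oxidation:}       & \mathrm{CO + \tfrac{1}{2}\,O_2} &\rightarrow \mathrm{CO_2} \\
  &\text{Hydrogen oxidation:} & \mathrm{H_2 + \tfrac{1}{2}\,O_2} &\rightarrow \mathrm{H_2O} \\
  &\text{Methane combustion:} & \mathrm{CH_4 + 2\,O_2} &\rightarrow \mathrm{CO_2 + 2\,H_2O} \\
  &\text{Water--gas shift:}   & \mathrm{CO + H_2O}   &\rightleftharpoons \mathrm{CO_2 + H_2} \\
  &\text{Steam reforming:}    & \mathrm{CH_4 + H_2O} &\rightarrow \mathrm{CO + 3\,H_2}
\end{align}
```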

Relevance: 100.00%

Abstract:

The conventional textbook demand/supply model of a competitive labour market predicts that a minimum wage set above the equilibrium wage reduces employment below its equilibrium level; the higher the minimum wage, the lower the employment. Empirical findings suggest, however, that raising the minimum wage does not necessarily reduce employment, a result known as the minimum-wage paradox, which is most readily explained by employers' monopsony power in the labour market. In contrast, the thought experiment reported in this article aims to develop a more general explanation, interpreting the minimum-wage paradox as the outcome of a competitive labour market that displays friction.
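As a reference point, the textbook prediction that the article starts from can be written as follows; the notation is generic and not taken from the article itself.

```latex
% Competitive labour market: demand L^D(w) decreasing, supply L^S(w) increasing,
% market-clearing wage and employment (w^*, L^*).
\begin{align}
  L^D(w^*) &= L^S(w^*) = L^* \\
  L(w_{\min}) &= L^D(w_{\min}) < L^*
      && \text{for a binding minimum wage } w_{\min} > w^* \\
  U(w_{\min}) &= L^S(w_{\min}) - L^D(w_{\min}) > 0
      && \text{(excess supply of labour)}
\end{align}
```

Since $L^D$ is decreasing, employment falls further as $w_{\min}$ rises; the minimum-wage paradox is the empirical observation that this prediction does not always hold.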

Relevance: 100.00%

Abstract:

Optimization of adaptive traffic signal timing is one of the most complex problems in traffic control systems. This dissertation presents a new method that applies the parallel genetic algorithm (PGA) to optimize adaptive traffic signal control in the presence of transit signal priority (TSP). The method can optimize the phase plan, cycle length and green splits at isolated intersections with consideration for the performance of both transit and general vehicles. Unlike the simple genetic algorithm (GA), the PGA can provide the better and faster solutions needed for real-time optimization of adaptive traffic signal control. An important component of the proposed method is the development of a microscopic delay estimation model designed specifically to optimize adaptive traffic signals with TSP. Macroscopic delay models such as the Highway Capacity Manual (HCM) delay model are unable to accurately consider the effect of phase combination and phase sequence in delay calculations. In addition, because the number of phases and the phase sequence of an adaptive traffic signal may vary from cycle to cycle, the phase splits cannot be optimized when the phase sequence is also a decision variable. A "flex-phase" concept was introduced in the proposed microscopic delay estimation model to overcome these limitations. The performance of the PGA was first evaluated against the simple GA. The results show that the PGA achieved both faster convergence and lower delay under both under-saturated and over-saturated traffic conditions. A VISSIM simulation testbed was then developed to evaluate the performance of the proposed PGA-based adaptive traffic signal control with TSP. The simulation results show that the PGA-based optimizer for adaptive TSP outperformed the fully actuated NEMA control in all test cases. The results also show that the PGA-based optimizer was able to produce TSP timing plans that benefit transit vehicles while minimizing the impact of TSP on general vehicles. The VISSIM testbed developed in this research provides a powerful tool to design and evaluate different TSP strategies under both actuated and adaptive signal control.
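To make the optimization step concrete, here is a minimal serial GA searching over green splits for a fixed cycle, using the HCM uniform-delay term as a stand-in objective. The dissertation instead uses a parallel GA, its own microscopic delay model and additional decision variables (phase plan, sequence, cycle length); every number below is invented for illustration.

```python
# Minimal GA over green splits for a fixed cycle, with the HCM uniform-delay
# term as a toy objective; a sketch only, not the dissertation's optimizer.
import numpy as np

rng = np.random.default_rng(1)
CYCLE = 90.0                                        # cycle length, s
EFFECTIVE_GREEN = CYCLE - 12.0                      # total green after lost time, s
FLOW_RATIOS = np.array([0.30, 0.25, 0.20, 0.15])    # v/s ratio per phase

def total_delay(greens):
    """Sum of HCM uniform delays (s/veh) over the four phases."""
    x = np.minimum(FLOW_RATIOS * CYCLE / greens, 1.0)          # degree of saturation
    d = 0.5 * CYCLE * (1 - greens / CYCLE) ** 2 / (1 - np.minimum(x, 0.95) * greens / CYCLE)
    return float(d.sum())

def normalise(greens):
    return EFFECTIVE_GREEN * greens / greens.sum()

def random_individual():
    return normalise(rng.random(4) + 0.1)

def crossover(a, b):
    return normalise(np.where(rng.random(4) < 0.5, a, b))      # uniform crossover

def mutate(greens):
    return normalise(greens * rng.normal(1.0, 0.05, 4))        # small multiplicative noise

population = [random_individual() for _ in range(40)]
for generation in range(100):
    population.sort(key=total_delay)
    parents = population[:10]                                  # elitist selection
    children = []
    while len(children) < 30:
        i, j = rng.integers(0, 10, size=2)
        children.append(mutate(crossover(parents[i], parents[j])))
    population = parents + children

best = min(population, key=total_delay)
print("best green splits (s):", np.round(best, 1), "delay:", round(total_delay(best), 1))
```

A parallel GA would distribute the fitness evaluations (or run several such populations as islands that exchange individuals), which is what allows the approach to meet real-time constraints.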

Relevance: 100.00%

Abstract:

This research investigates a new structural system utilising modular construction, in which five-sided boxes are cast on-site and stacked together to form a building. An analytical model of a typical building was created in each of two finite element analysis programs (Robot Millennium and ETABS). The pros and cons of both Robot Millennium and ETABS are listed at several key stages in the development of an analytical model utilising this structural system. Robot Millennium was initially utilised but produced an analytical model too large to be run successfully; the computational requirements were too great for conventional computers. Robot Millennium was therefore abandoned in favour of ETABS, whose simpler algorithms and assumptions permitted running this large computational model. Tips are provided and pitfalls signalled throughout the process of modelling complex buildings of this type. Under high seismic loading the building required a new horizontal shear mechanism; this dissertation proposes a secondary floor that ties to the modular box through the use of gunwales and roughened surfaces with epoxy coatings. In addition, the vertical connections necessitated a new type of shear wall, consisting of waffled external walls tied together through both reinforcement and a secondary concrete pour. This structural system generates a building that was found to be very rigid compared to a conventional structure. The proposed modular building exhibited a period of 1.27 seconds, about one-fifth that of a conventional building. The maximum lateral drift, 6.14 inches, occurs under seismic loading and is one-quarter of a conventional building's drift. The deflected shape and the pattern of the interstorey drifts are consistent with those of a coupled shear wall building. In conclusion, the computer analyses indicate that this new structure exceeds current code requirements for both hurricane winds and high seismic loads, while providing a shortened construction time at reduced cost.

Relevance: 100.00%

Abstract:

This thesis investigates the space given to radio reportage and the production conditions of radio newscasts on commercial stations operating in frequency modulation (FM) in the city of Natal, Rio Grande do Norte. Preliminary empirical observation showed that this journalistic style (radio reportage) is rarely present in the schedules of local commercial radio stations, leaving a lack of in-depth news coverage. The research is based on the content broadcast in four daily radio news programs transmitted by commercial stations. It raises the hypothesis that the crisis affecting journalistic companies could explain the lack of financial investment in in-depth news reporting. The starting point for the case study (Yin, 2005) was bibliographical research to build a theoretical frame of reference for the style under study, based on Prado (1989), Bespalhok (2006), Meditsch (2007), Lopes (2013) and Ferraretto (2014). The methodology also included listening to the content broadcast during one week on the four news programs analyzed, participant observation and interviews with the professionals who produce these programs. Eight items with characteristics similar to radio reportage were found, all broadcast in just one of the analyzed programs. According to the interviewees, the format is rarely used because it would generate costs that are prohibitively high for the stations. The research also found that, besides the lack of entrepreneurial vision, there is complacency among professionals who could produce such reports even with the limited structure available. Finally, this work points out the need to invest more in local radio journalism, in order to improve the quality of the information provided by commercial broadcasters in Natal, and to train journalism students to make the fullest use of radio's potential.

Relevance: 100.00%

Abstract:

This study arose from, and was founded on, the aim of analyzing the position taken by the family, and the implications in this subjective field, in the care of children and adolescents at a Childhood Psychosocial Care Center (CAPSi). A secondary goal was to understand the institutional representation of the family and of the service offered to children, adolescents and relatives. To this end, the historical perspective on the relation between the State (laws and institutions) and the family was revisited, together with an understanding of how this relation was mediated by medical knowledge and served ideological and political ends. The social and ideological transformations of the 20th century culminated in the need for change demanded by the Psychiatric Reform and in patients winning the right to return home and to their families. This new situation, permeated by the attempt to build a care model in Mental Health, presented a peculiarity: the close relationship between families and Mental Health services. The observations and conversations at the CAPSi that were the objects of investigation in this research sought to capture the everyday life of the families and the possible treatment alternatives that would take family circumstances into account. Understanding the status of families in Mental Health services is a germinal matter yet to be settled among the forms of knowledge at work in the post-Reform services. The discussion of family bonds, anchored in the theoretical perspective of Binding Psychoanalysis, brought up elementary concepts such as psychic heritage, the denial pact, unconscious alliances and the family psychic apparatus, among others. The concept of the family organizer helps in thinking about family trajectories and establishes a kind of identity for the family, as well as interfering in the establishment of its boundaries. The family's path within the institution, as well as the position it takes towards the institutional approach, reflects the ghosts and the organizers shared by the family group. It follows that the family is an important protagonist to be considered in current therapeutic and political processes, and that it is therefore essential to welcome the family bonds, since the unconscious alliances act upon affective destinations and may also try to remain sealed off from the proposed changes.

Relevance: 100.00%

Abstract:

The researchers' genuine interest in the psychosocial work environment in relation to middle managers prompted a deeper dive into the area, highlighting central elements in the form of demands, control and social support. Awareness of the psychosocial work environment is growing, as ill health in working life increases and the Swedish Work Environment Authority's new provision on the organisational and social work environment is in focus. In line with this increased awareness, we direct particular attention to middle managers, who must be able to handle demands from both above and below. The study is based primarily on Robert Karasek's and Töres Theorell's demand-control-support model. The aim is to examine middle managers' experience of the psychosocial work environment in a private distribution and logistics company. The method is qualitative: a case study based on semi-structured interviews with eight respondents from the distribution centre. The results show that the middle managers have a high degree of influence; the experience of job demands varies, but in relation to the position the demands are reasonable. Social support at the workplace is perceived as good and is regarded by the function managers as an important and central part of the work. The conclusions reached are that the function managers face reasonable demands and experience a good level of control at work, but that too high a level of control can lead to negative stress. A balanced alternation between active jobs and low-strain jobs is considered an advantage for preserving a good psychosocial work environment, counteracting the negative effects that can arise from remaining in either state for too long. The study shows that the company is considered to have a good psychosocial work environment and can thus be seen as a good example in working life.

Relevance: 100.00%

Abstract:

v. 46, n. 2, p. 140-148, apr./jun. 2016.