906 results for EQUATION-ERROR MODELS


Relevance:

30.00%

Publisher:

Abstract:

This dissertation aimed to improve travel time estimation for the purpose of transportation planning by developing a travel time estimation method that incorporates the effects of signal timing plans, which are difficult to consider in planning models. For this purpose, an analytical model was developed. The model parameters were calibrated with data from CORSIM microscopic simulation, with signal timing plans optimized using the TRANSYT-7F software. Independent variables in the model are link length, free-flow speed, and traffic volumes from the competing turning movements. The developed model has three advantages over traditional link-based or node-based models. First, it considers the influence of signal timing plans for a variety of traffic volume combinations without requiring signal timing information as input. Second, it describes the non-uniform spatial distribution of delay along a link, making it possible to estimate the impacts of queues at different upstream locations of an intersection and to attribute delays to the subject link and the upstream link. Third, it shows promise of improving the accuracy of travel time prediction. The mean absolute percentage error (MAPE) of the model is 13% for a set of field data from the Minnesota Department of Transportation (MDOT); this is close to the MAPE of the uniform delay in the HCM 2000 method (11%). The HCM is the industry-accepted analytical model in the existing literature, but it requires signal timing information as input for calculating delays. The developed model also outperforms the HCM 2000 method for a set of Miami-Dade County data representing congested traffic conditions, with a MAPE of 29% compared with 31% for the HCM 2000 method. These advantages make the proposed model feasible for application to a large network without the burden of signal timing input, while improving the accuracy of travel time estimation. An assignment model with the developed travel time estimation method has been implemented in a South Florida planning model, where it improved assignment results.
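For readers unfamiliar with the error metric used in these comparisons, a minimal sketch of how MAPE is typically computed (illustrative arrays only, not the dissertation's data or code):

```python
import numpy as np

def mape(observed, estimated):
    """Mean absolute percentage error, in percent."""
    observed = np.asarray(observed, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    return 100.0 * np.mean(np.abs((observed - estimated) / observed))

# Hypothetical link travel times (seconds): field observations vs. model estimates.
observed_tt = [62.0, 48.5, 90.2, 75.0]
estimated_tt = [70.1, 44.0, 99.5, 81.3]
print(f"MAPE = {mape(observed_tt, estimated_tt):.1f}%")
```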

Relevance:

30.00%

Publisher:

Abstract:

This research addresses the problem of cost estimation for product development in engineer-to-order (ETO) operations. An ETO operation starts the product development process with a product specification and ends with delivery of a rather complicated, highly customized product. ETO operations are practiced in various industries such as engineering tooling, factory plants, industrial boilers, pressure vessels, shipbuilding, bridges and buildings. ETO views each product as a delivery item in an industrial project and needs an accurate estimate of its development cost at the bidding and/or planning stage, before any design or manufacturing activity starts. Many ETO practitioners rely on an ad hoc approach to cost estimation, using past projects as references and adapting them to the new requirements. This process is often carried out on a case-by-case basis and in a non-procedural fashion, limiting its applicability to other industry domains and its transferability to other estimators. In addition to being time consuming, this approach usually does not lead to an accurate cost estimate, with errors ranging from 30% to 50%. This research proposes a generic cost modeling methodology for application in ETO operations across various industry domains. Using the proposed methodology, a cost estimator can develop a cost estimation model for a chosen ETO industry in a more expeditious, systematic and accurate manner. The development of the proposed methodology followed the meta-methodology outlined by Thomann. Deploying the methodology, cost estimation models were created in two industry domains (building construction and steel milling equipment manufacturing). The models were then applied to real cases; the resulting cost estimates are significantly more accurate than the actual estimates, with a mean absolute error rate of 17.3%. This research fills an important need for quick and accurate cost estimation across various ETO industries. It differs from existing approaches in that a methodology is developed that can be used to quickly customize a cost estimation model for a chosen application domain. In addition to more accurate estimation, the major contributions are its transferability to other users and its applicability to different ETO operations.

Relevance:

30.00%

Publisher:

Abstract:

Colleges base their admission decisions on a number of factors to determine which applicants have the potential to succeed. This study utilized data for students who graduated from Florida International University between 2006 and 2012. Two models were developed (one using SAT and the other using ACT as the principal explanatory variable) to predict college success, measured by the student's college grade point average at graduation. Other factors used to make these predictions were high school performance, socioeconomic status, major, gender, and ethnicity. The model using ACT had a higher R^2, but the model using SAT had a lower mean square error. African Americans had a significantly lower college grade point average than graduates of other ethnicities. Females had a significantly higher college grade point average than males.
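A minimal sketch of the kind of comparison described (ordinary least squares fit, then R^2 and mean square error), using synthetic stand-in variables rather than the study's FIU data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-ins for the study's variables (illustrative only).
sat = rng.normal(1100, 150, n)        # SAT composite
act = rng.normal(24, 4, n)            # ACT composite
hs_gpa = rng.normal(3.3, 0.4, n)      # high school performance
college_gpa = 1.0 + 0.0015 * sat + 0.35 * hs_gpa + rng.normal(0, 0.3, n)

for name, score in [("SAT", sat), ("ACT", act)]:
    X = np.column_stack([score, hs_gpa])
    model = LinearRegression().fit(X, college_gpa)
    pred = model.predict(X)
    print(name, "R^2 =", round(r2_score(college_gpa, pred), 3),
          "MSE =", round(mean_squared_error(college_gpa, pred), 3))
```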

Relevance:

30.00%

Publisher:

Abstract:

Underwater sound is very important in the field of oceanography, where it is used for remote sensing in much the same way that radar is used in atmospheric studies. One way to mathematically model sound propagation in the ocean is the parabolic-equation method, a technique that allows range-dependent environmental parameters. More importantly, this method can model sound transmission where the source emits either a pure tone or a short pulse of sound. Based on the parabolic approximation and using the split-step Fourier algorithm, a computer model for underwater sound propagation was designed and implemented. This computer model differs from previous models in its use of an interactive mode, structured programming, modular design, and state-of-the-art graphics displays. In addition, the model maximizes the efficiency of computer time through synchronization of loosely coupled dual processors and the design of a restart capability. Since the model is designed for adaptability and for users with limited computer skills, it is anticipated that it will have many applications in the scientific community.
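A minimal sketch of a split-step Fourier march of the standard parabolic equation, under simplifying assumptions (constant sound speed, a crude Gaussian starter field, no absorbing boundary); it illustrates the algorithm named in the abstract, not the dissertation's implementation:

```python
import numpy as np

# March the standard parabolic equation
#   2 i k0 dpsi/dr + d^2 psi/dz^2 + k0^2 (n^2 - 1) psi = 0
# outward in range with the split-step Fourier algorithm.
f = 100.0                   # source frequency, Hz
c0 = 1500.0                 # reference sound speed, m/s
k0 = 2 * np.pi * f / c0
nz, dz = 1024, 1.0          # depth grid
dr = 10.0                   # range step, m
z = np.arange(nz) * dz
kz = 2 * np.pi * np.fft.fftfreq(nz, d=dz)

n_index = np.ones(nz)       # index of refraction c0/c(z); constant here
psi = np.exp(-((z - 100.0) / 10.0) ** 2)   # Gaussian starter centred at 100 m depth

for _ in range(500):        # 500 x 10 m = 5 km in range
    # free-space (diffraction) half of the operator, applied in the wavenumber domain
    psi = np.fft.ifft(np.exp(-1j * kz**2 * dr / (2 * k0)) * np.fft.fft(psi))
    # refraction half of the operator, applied in the depth domain
    psi *= np.exp(1j * k0 * (n_index**2 - 1) * dr / 2)

print("max |psi| after 5 km:", np.abs(psi).max())
```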

Relevance:

30.00%

Publisher:

Abstract:

The sedimentary sections of three cores from the Celtic margin provide high-resolution records of the terrigenous fluxes during the last glacial cycle. A total of 21 AMS 14C dates allow us to define age models with a resolution better than 100 yr during critical periods such as Heinrich events 1 and 2. Maximum sedimentary fluxes occurred at the Meriadzek Terrace site during the Last Glacial Maximum (LGM). Detailed X-ray imagery of core MD95-2002 from the Meriadzek Terrace shows no sedimentary structures suggestive of either deposition from high-density turbidity currents or significant erosion. Two paroxysmal terrigenous flux episodes have been identified. The first occurred after the deposition of Heinrich event 2 Canadian ice-rafted debris (IRD) and includes IRD from European sources. We suggest that the second represents an episode of deposition from turbid plumes that precedes the IRD deposition associated with Heinrich event 1. At the end of marine isotopic stage 2 (MIS 2) and the beginning of MIS 1, the highest fluxes are recorded on the Whittard Ridge, where they correspond to deposition from turbidity current overflows. Canadian icebergs rafted debris to the Celtic margin during Heinrich events 1, 2, 4 and 5. The high-resolution records of Heinrich events 1 and 2 show that in both cases the arrival of the Canadian icebergs was preceded by a European ice-rafting precursor event, which took place about 1-1.5 kyr earlier. Two rafting episodes of European IRD also occurred immediately after Heinrich event 2 and just before Heinrich event 1. The terrigenous fluxes recorded in core MD95-2002 during the LGM are the highest reported at hemipelagic sites on the northwestern European margin. The magnitude of the Canadian IRD fluxes at the Meriadzek Terrace is similar to that at oceanic sites.

Relevance:

30.00%

Publisher:

Abstract:

This research explores Bayesian updating as a tool for estimating parameters probabilistically by dynamic analysis of data sequences. Two distinct Bayesian updating methodologies are assessed. The first approach focuses on Bayesian updating of failure rates for primary events in fault trees. A Poisson Exponentially Weighted Moving Average (PEWMA) model is implemented to carry out Bayesian updating of failure rates for individual primary events in the fault tree. To provide a basis for testing the PEWMA model, a fault tree is developed based on the Texas City Refinery incident of 2005. A qualitative fault tree analysis is then carried out to obtain a logical expression for the top event. A dynamic fault tree analysis is carried out by evaluating the top event probability at each Bayesian updating step by Monte Carlo sampling from the posterior failure rate distributions. It is demonstrated that PEWMA modeling is advantageous over conventional conjugate Poisson-Gamma updating techniques when failure data are collected over long time spans. The second approach focuses on Bayesian updating of parameters in non-linear forward models. Specifically, the technique is applied to the hydrocarbon material balance equation. In order to test the accuracy of the implemented Bayesian updating models, a synthetic data set is developed using the Eclipse reservoir simulator. Both structured-grid and MCMC sampling based solution techniques are implemented and are shown to model the synthetic data set with good accuracy. Furthermore, a graphical analysis shows that the implemented MCMC model displays good convergence properties. A case study demonstrates that the likelihood variance affects the rate at which the posterior assimilates information from the measured data sequence. Error in the measured data significantly affects the accuracy of the posterior parameter distributions. Increasing the likelihood variance mitigates random measurement errors, but causes the overall variance of the posterior to increase. Bayesian updating is shown to be advantageous over deterministic regression techniques, as it allows for the incorporation of prior belief and full modeling of uncertainty over the parameter ranges. As such, the Bayesian approach to estimating parameters in the material balance equation shows utility for incorporation into reservoir engineering workflows.
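The conventional conjugate Poisson-Gamma updating that the PEWMA model is benchmarked against can be sketched in a few lines; the prior parameters and failure counts below are illustrative only:

```python
# Conjugate Poisson-Gamma updating of a failure rate lambda.
# Gamma(alpha, beta) prior; Poisson-distributed failure counts over known exposure times.
alpha, beta = 1.0, 2.0               # prior shape and rate (per year)

failure_counts = [0, 1, 0, 2, 1]     # hypothetical yearly failure counts
exposure_years = [1.0] * 5

for k, t in zip(failure_counts, exposure_years):
    alpha += k                       # posterior shape accumulates observed failures
    beta += t                        # posterior rate accumulates exposure time
    print(f"posterior mean failure rate = {alpha / beta:.3f} /yr")
```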

Relevance:

30.00%

Publisher:

Abstract:

Oxygen and carbon isotope measurements were carried out on tests of the planktic foraminifer N. pachyderma (sin.) from eight sediment cores taken from the eastern Arctic Ocean, the Fram Strait, and the Iceland Sea, in order to reconstruct Arctic Ocean and Norwegian-Greenland Sea circulation patterns and ice cover during the last 130,000 years. In addition, the influence of ice, temperature and salinity effects on the isotopic signal was quantified. Isotope measurements on foraminifers from sediment surface samples were used to elucidate the ecology of N. pachyderma (sin.). Changes in the oxygen and carbon isotope composition of N. pachyderma (sin.) from sediment surface samples document the horizontal and vertical changes of water mass boundaries controlled by water temperature and salinity, because N. pachyderma (sin.) shows drastic changes in depth habitat depending on the water mass properties. It could be shown that an apparent regional and spatial increase of the ice effect occurred in the investigated areas, especially during Termination I through direct advection of meltwater from nearby continents, or during the terminations and in interglacials through the supply of isotopically light water from rivers. A northward-proceeding overprint of the 'global' ice effect, increasing from the Norwegian-Greenland Sea to the Arctic Ocean, could not be demonstrated. By means of a model, the influence of temperature and salinity on the global ice volume signal during the last 130,000 years was determined. In combination with the results of this study, the model formed the basis for a reconstruction of the paleoceanographic development of the Arctic Ocean and the Norwegian-Greenland Sea during this time interval. The conception of a relatively thick and permanent sea ice cover in the Nordic Seas during glacial times should be replaced by the model of a seasonally and regionally highly variable ice cover. Only during isotope stage 5e may local deep water formation have occurred in the Fram Strait.

Relevance:

30.00%

Publisher:

Abstract:

We propose and examine an integrable system of nonlinear equations that generalizes the nonlinear Schrödinger equation to 2+1 dimensions. This integrable system of equations is a promising starting point for elaborating more accurate models in nonlinear optics and molecular systems within the continuum limit. The Lax pair for the system is derived by applying the singular manifold method. We also present an iterative procedure for constructing solutions from a seed solution. Solutions with one-, two-, and three-lump solitons are thoroughly discussed.
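For context, the (1+1)-dimensional cubic nonlinear Schrödinger equation that the proposed system generalizes can be written, in one common normalization, as

```latex
i\,\psi_t + \psi_{xx} + 2\,|\psi|^2\,\psi = 0 ,
```

the abstract's (2+1)-dimensional system and its Lax pair are not reproduced here.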

Relevance:

30.00%

Publisher:

Abstract:

Acknowledgement: One of us (AP) wishes to acknowledge S. Flach for enlightening discussions about the relationship between the DNLS equation and the rotor model.

Relevance:

30.00%

Publisher:

Abstract:

Human use of the oceans is increasingly in conflict with conservation of endangered species. Methods for managing the spatial and temporal placement of industries such as military activity, fishing, transportation and offshore energy have historically been post hoc; i.e. the time and place of human activity is often already determined before assessment of environmental impacts. In this dissertation, I build robust species distribution models in two case study areas, the U.S. Atlantic (Best et al. 2012) and British Columbia (Best et al. 2015), predicting presence and abundance respectively, from scientific surveys. These models are then applied to novel decision frameworks for preemptively suggesting optimal placement of human activities in space and time to minimize ecological impacts: siting for offshore wind energy development, and routing ships to minimize the risk of striking whales. Both decision frameworks relate the tradeoff between conservation risk and industry profit with synchronized variable and map views as online spatial decision support systems.

For siting offshore wind energy development (OWED) in the U.S. Atlantic (chapter 4), bird density maps are combined across species with weights of OWED sensitivity to collision and displacement, and 10 km² sites are compared against OWED profitability based on average annual wind speed at 90 m hub height and distance to the transmission grid. A spatial decision support system enables toggling between the map and tradeoff plot views by site. A selected site can be inspected for sensitivity to cetaceans throughout the year, so as to identify the months of the year which minimize episodic impacts of pre-operational activities such as seismic airgun surveying and pile driving.

Routing ships to avoid whale strikes (chapter 5) can be similarly viewed as a tradeoff, but is a different problem spatially. A cumulative cost surface is generated from density surface maps and the conservation status of cetaceans, and then applied as a resistance surface to calculate least-cost routes between start and end locations, i.e. ports and entrance locations to the study areas. Varying a multiplier on the cost surface enables calculation of multiple routes with different costs to the conservation of cetaceans versus cost to the transportation industry, measured as distance. Similar to the siting chapter, a spatial decision support system enables toggling between the map and tradeoff plot views of proposed routes. The user can also input arbitrary start and end locations to calculate the tradeoff on the fly.
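A minimal sketch of least-cost routing over a cumulative cost surface, using a plain Dijkstra search on a synthetic grid with a hypothetical conservation multiplier; the dissertation's implementation and connectivity choices may well differ:

```python
import heapq
import numpy as np

def least_cost_route(cost, start, end):
    """Dijkstra over a 2-D cost grid (4-connected); returns total cost and path."""
    rows, cols = cost.shape
    dist = np.full(cost.shape, np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [end], end
    while node != start:        # walk predecessors back to the start cell
        node = prev[node]
        path.append(node)
    return dist[end], path[::-1]

# Hypothetical cumulative cost surface: base transit cost plus a whale-density penalty.
rng = np.random.default_rng(1)
whale_density = rng.random((50, 80))
multiplier = 5.0                      # raise to weight conservation more heavily
cost_surface = 1.0 + multiplier * whale_density
total, route = least_cost_route(cost_surface, start=(45, 2), end=(5, 75))
print("route cost:", round(total, 1), "cells:", len(route))
```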

Essential inputs to these decision frameworks are the distributions of the species. The two preceding chapters comprise species distribution models from two case study areas, the U.S. Atlantic (chapter 2) and British Columbia (chapter 3), predicting presence and density, respectively. Although density is preferred for estimating potential biological removal, per Marine Mammal Protection Act requirements in the U.S., the necessary parameters, especially the distance and angle of observation, are less readily available across publicly mined datasets.

In the case of predicting cetacean presence in the U.S. Atlantic (chapter 2), I extracted datasets from the online OBIS-SEAMAP geo-database and integrated scientific surveys conducted by ship (n=36) and aircraft (n=16), weighting a Generalized Additive Model by minutes surveyed within space-time grid cells to harmonize effort between the two survey platforms. For each of 16 cetacean species guilds, I predicted the probability of occurrence from static environmental variables (water depth, distance to shore, distance to the continental shelf break) and time-varying conditions (monthly sea-surface temperature). To generate maps of presence vs. absence, Receiver Operating Characteristic (ROC) curves were used to define the optimal threshold that minimizes the false positive and false negative error rates. I integrated model outputs, including tables (species in guilds, input surveys) and plots (fit of environmental variables, ROC curve), into an online spatial decision support system, allowing for easy navigation of models by taxon, region, season, and data provider.
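A minimal sketch of choosing a presence/absence threshold from a ROC curve by minimizing the combined false positive and false negative rates (synthetic scores stand in for the GAM-predicted probabilities of occurrence):

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, 1000)                               # observed presence/absence
y_score = np.clip(0.5 * y_true + rng.normal(0.3, 0.2, 1000), 0, 1)  # predicted probabilities

fpr, tpr, thresholds = roc_curve(y_true, y_score)
# Minimising FPR + FNR is equivalent to maximising Youden's J = TPR - FPR.
best = np.argmax(tpr - fpr)
print("optimal threshold:", round(thresholds[best], 3))
presence_map = y_score >= thresholds[best]                      # binary presence/absence
```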

For predicting cetacean density within the inner waters of British Columbia (chapter 3), I calculated density from systematic, line-transect marine mammal surveys over multiple years and seasons (summer 2004, 2005, 2008, and spring/autumn 2007) conducted by Raincoast Conservation Foundation. Abundance estimates were calculated using two different methods: Conventional Distance Sampling (CDS) and Density Surface Modelling (DSM). CDS generates a single density estimate for each stratum, whereas DSM explicitly models spatial variation and offers potential for greater precision by incorporating environmental predictors. Although DSM yields a more relevant product for the purposes of marine spatial planning, CDS has proven to be useful in cases where there are fewer observations available for seasonal and inter-annual comparison, particularly for the scarcely observed elephant seal. Abundance estimates are provided on a stratum-specific basis. Steller sea lions and harbour seals are further differentiated by ‘hauled out’ and ‘in water’. This analysis updates previous estimates (Williams & Thomas 2007) by including additional years of effort, providing greater spatial precision with the DSM method over CDS, novel reporting for spring and autumn seasons (rather than summer alone), and providing new abundance estimates for Steller sea lion and northern elephant seal. In addition to providing a baseline of marine mammal abundance and distribution, against which future changes can be compared, this information offers the opportunity to assess the risks posed to marine mammals by existing and emerging threats, such as fisheries bycatch, ship strikes, and increased oil spill and ocean noise issues associated with increases of container ship and oil tanker traffic in British Columbia’s continental shelf waters.

Starting with marine animal observations at specific coordinates and times, I combine these data with environmental data, often satellite derived, to produce seascape predictions generalizable in space and time. These habitat-based models enable prediction of encounter rates and, in the case of density surface models, abundance that can then be applied to management scenarios. Specific human activities, OWED and shipping, are then compared within a tradeoff decision support framework, enabling interchangeable map and tradeoff plot views. These products make complex processes transparent, allowing conservation, industry and stakeholders to game scenarios towards optimal marine spatial management, which is fundamental to the tenets of marine spatial planning, ecosystem-based management and dynamic ocean management.

Relevance:

30.00%

Publisher:

Abstract:

A new modality for preventing HIV transmission is emerging in the form of topical microbicides. Some clinical trials have shown promising results for these methods of protection, while other trials have failed to show efficacy. Due to the relatively novel nature of microbicide drug transport, a rigorous, deterministic analysis of that transport can help improve the design of microbicide vehicles and help interpret results from clinical trials. This type of analysis can aid microbicide product design by helping to understand and organize the determinants of drug transport and the potential efficacies of candidate microbicide products.

Microbicide drug transport is modeled as a diffusion process with convection and reaction effects in appropriate compartments. This is applied here to vaginal gels and rings and a rectal enema, all delivering the microbicide drug Tenofovir. Although the focus here is on Tenofovir, the methods established in this dissertation can readily be adapted to other drugs, given knowledge of their physical and chemical properties, such as the diffusion coefficient, partition coefficient, and reaction kinetics. Other dosage forms such as tablets and fiber meshes can also be modeled using the perspective and methods developed here.

The analyses here include convective details of intravaginal flows by both ambient fluid and spreading gels with different rheological properties and applied volumes. These are input to the overall conservation equations for drug mass transport in different compartments. The results are Tenofovir concentration distributions in time and space for a variety of microbicide products and conditions. The Tenofovir concentrations in the vaginal and rectal mucosal stroma are converted, via a coupled reaction equation, to concentrations of Tenofovir diphosphate, which is the active form of the drug that functions as a reverse transcriptase inhibitor against HIV. Key model outputs are related to concentrations measured in experimental pharmacokinetic (PK) studies, e.g. concentrations in biopsies and blood. A new measure of microbicide prophylactic functionality, the Percent Protected, is calculated. This is the time-dependent fraction of the volume of the entire stroma (and thus of the host cells therein) in which Tenofovir diphosphate concentrations equal or exceed a target prophylactic value, e.g. an EC50.
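At any single time point, the Percent Protected defined above reduces to the fraction of stromal volume at or above the target concentration; a minimal sketch with a synthetic concentration field (the actual models obtain that field by solving the coupled transport and reaction equations):

```python
import numpy as np

def percent_protected(conc, target):
    """Percentage of stromal volume where drug concentration meets/exceeds the target."""
    conc = np.asarray(conc, dtype=float)
    return 100.0 * np.mean(conc >= target)

# Hypothetical Tenofovir diphosphate concentration field over a stromal grid (arbitrary units).
rng = np.random.default_rng(3)
tfv_dp = rng.lognormal(mean=1.0, sigma=0.8, size=(200, 200))
ec50 = 2.5                                   # illustrative target prophylactic value
print(f"Percent Protected = {percent_protected(tfv_dp, ec50):.1f}%")
```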

Results show the prophylactic potentials of the studied microbicide vehicles against HIV infection. Key design parameters for each are addressed in applying the models. For a vaginal gel, fast spreading at small volume is more effective than slower spreading at high volume. Vaginal rings are shown to be most effective if inserted and retained as close to the fornix as possible. Because of the long half-life of Tenofovir diphosphate, temporary removal of the vaginal ring (after achieving steady state) for up to 24 h does not appreciably diminish the Percent Protected. However, full steady state (for the entire stromal volume) is not achieved until several days after ring insertion. Delivery of Tenofovir to the rectal mucosa by an enema is dominated by the surface area of coated mucosa and by whether the interiors of rectal crypts are filled with the enema fluid. For the enema, 100% Percent Protected is achieved much more rapidly than for the vaginal products, primarily because of the much thinner epithelial layer of the rectal mucosa. For example, 100% Percent Protected can be achieved with a one-minute enema application and a 15-minute wait time.

Results of these models show good agreement with experimental pharmacokinetic data from animals and clinical trials. They also improve upon traditional, empirical PK modeling, as illustrated here. Our deterministic approach can inform the design of sampling in clinical trials by indicating time periods during which significant changes in drug concentrations occur in different compartments. More fundamentally, the work here helps delineate the determinants of microbicide drug delivery. This information can be key to the improved, rational design of microbicide products and their dosage regimens.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this thesis is to identify the relationship between subjective well-being and economic insecurity for public and private sector workers in Ireland using the European Social Survey 2010-2012. Life satisfaction and job satisfaction are the indicators used to measure subjective well-being. Economic insecurity is approximated by regional unemployment rates and self-perceived job insecurity. Potential sample selection bias and endogeneity bias are accounted for. It is traditionally believed that public sector workers are relatively more protected against insecurity due to the very institution of public sector employment. The institution of public sector employment is made up of stricter dismissal practices (Luechinger et al., 2010a) and less volatile employment (Freeman, 1987), where workers are subsequently less likely to be affected by business cycle downturns (Clark and Postel-Vinay, 2009). It is found in the literature that economic insecurity depresses the well-being of public sector workers to a lesser degree than that of private sector workers (Luechinger et al., 2010a; Artz and Kaya, 2014). These studies provide the rationale for this thesis in testing for similar relationships in an Irish context. Sample selection bias arises when selection into a particular category is not random (Heckman, 1979). An example of this is non-random selection into public sector employment based on personal characteristics (Heckman, 1979; Luechinger et al., 2010b). If selection into public sector employment is not corrected for, this can lead to biased and inconsistent estimators (Gujarati, 2009). Selection bias of public sector employment is corrected for by using a standard two-step Heckman probit-OLS estimation method. Following Luechinger et al. (2010b), the propensity of individuals to select into public sector employment is estimated by a binomial probit model with the inclusion of the additional regressor Irish citizenship. Job satisfaction is then estimated by Ordinary Least Squares (OLS) with the inclusion of a sample selection correction term, similar to what is done in Clark (1997). Endogeneity is where an independent variable included in the model is determined within the context of the model (Chenhall and Moers, 2007). The econometric definition states that an endogenous independent variable is one that is correlated with the error term (Wooldridge, 2010). Endogeneity is expected to be present due to a simultaneous relationship between job insecurity and job satisfaction, whereby both variables are jointly determined (Theodossiou and Vasileiou, 2007). Simultaneity, as an instigator of endogeneity, is corrected for using Instrumental Variables (IV) techniques. Limited Information and Full Information methods of estimating simultaneous equations models are assessed and compared. The general results show that job insecurity depresses the subjective well-being of all workers in both the public and private sectors in Ireland. The magnitude of this effect differs among sectoral workers. The subjective well-being of private sector workers is more adversely affected by job insecurity than that of public sector workers. This is observed in basic ordered probit estimations of both a life satisfaction equation and a job satisfaction equation.
The marginal effects from the ordered probit estimation of a basic job satisfaction equation show that as job insecurity increases, the probability of reporting a 9 on a 10-point job satisfaction scale decreases significantly, by 3.4% for the whole sample of workers, 2.8% for public sector workers and 4.0% for private sector workers. Artz and Kaya (2014) explain that as a result of the many austerity policies implemented to reduce government expenditure during the economic recession, workers in the public sector may for the first time face worsening perceptions of job security, which can have significant implications for their well-being. This can be observed in the marginal effects, where job insecurity negatively impacts the well-being of public sector workers in Ireland. However, in accordance with Luechinger et al. (2010a), the results show that private sector workers are more adversely impacted by economic insecurity than public sector workers. This suggests that in a time of high economic volatility, the institution of public sector employment held and was able to protect workers against some of the well-being consequences of rising insecurity. In estimating the relationship between subjective well-being and economic insecurity, advanced econometric issues arise. The results show that when selection bias is corrected for, any statistically significant relationship between job insecurity and job satisfaction disappears for public sector workers. Additionally, in order to correct for endogeneity bias, the simultaneous equations model for job satisfaction and job insecurity is estimated by Limited Information and Full Information methods. The results from two different estimators classified as Limited Information methods support the general findings of this research. Moreover, the magnitudes of the endogeneity-corrected estimates are twice as large as those of the estimates not corrected for endogeneity bias, as is similarly found in Geishecker (2010, 2012). As part of the analysis of the effect of economic insecurity on subjective well-being, the effects of other socioeconomic and work-related variables are examined for public and private sector workers in Ireland.
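A minimal sketch of the two-step Heckman-style correction described above (probit selection equation, inverse Mills ratio, then OLS on the selected sample), using synthetic stand-in variables rather than the European Social Survey data:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 2000
# Synthetic stand-ins (not ESS variables): citizenship plays the exclusion-restriction role.
citizenship = rng.integers(0, 2, n)
educ = rng.normal(0, 1, n)
u = rng.normal(0, 1, n)                                          # shared unobservable
public = (0.8 * citizenship + 0.3 * educ + u > 0).astype(int)    # selection equation
job_insecurity = rng.normal(0, 1, n)
job_sat = 5 - 0.6 * job_insecurity + 0.2 * educ + 0.5 * u + rng.normal(0, 1, n)

# Step 1: binomial probit for selection into public sector employment.
Z = sm.add_constant(np.column_stack([citizenship, educ]))
probit = sm.Probit(public, Z).fit(disp=0)
xb = Z @ probit.params
inv_mills = norm.pdf(xb) / norm.cdf(xb)                          # sample correction term

# Step 2: OLS job satisfaction equation for the selected sample, with the correction term.
sel = public == 1
X = sm.add_constant(np.column_stack([job_insecurity[sel], educ[sel], inv_mills[sel]]))
print(sm.OLS(job_sat[sel], X).fit().params)
```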

Relevance:

30.00%

Publisher:

Abstract:

Key life history traits such as breeding time and clutch size are frequently both heritable and under directional selection, yet many studies fail to document micro-evolutionary responses. One general explanation is that selection estimates are biased by the omission of correlated traits that have causal effects on fitness, but few valid tests of this exist. Here we show, using a quantitative genetic framework and six decades of life-history data on two free-living populations of great tits Parus major, that selection estimates for egg-laying date and clutch size are relatively unbiased. Predicted responses to selection based on the Robertson-Price Identity were similar to those based on the multivariate breeder’s equation, indicating that unmeasured covarying traits were not missing from the analysis. Changing patterns of phenotypic selection on these traits (for laying date, linked to climate change) therefore reflect changing selection on breeding values, and genetic constraints appear not to limit their independent evolution. Quantitative genetic analysis of correlational data from pedigreed populations can be a valuable complement to experimental approaches to help identify whether apparent associations between traits and fitness are biased by missing traits, and to parse the roles of direct versus indirect selection across a range of environments.
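For reference, the two response predictions compared in the abstract are, in standard quantitative-genetics notation (the symbols here are the textbook ones, not necessarily those of the paper),

```latex
\Delta\bar{\mathbf{z}} \;=\; \mathbf{G}\,\mathbf{P}^{-1}\mathbf{s} \;=\; \mathbf{G}\,\boldsymbol{\beta}
\quad\text{(multivariate breeder's equation)},
\qquad
\Delta\bar{z} \;=\; \sigma_{A}(w, z)
\quad\text{(Robertson--Price identity)},
```

where G and P are the additive-genetic and phenotypic covariance matrices, s the vector of selection differentials, β the selection gradients, and σ_A(w, z) the additive genetic covariance between relative fitness w and trait z. Agreement between the two predictions is what indicates that unmeasured covarying traits are not biasing the selection estimates.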

Relevance:

30.00%

Publisher:

Abstract:

The annotation of Business Dynamics models with parameters and equations, to simulate the system under study and further evaluate its simulation output, typically involves a lot of manual work. In this paper we present an approach for automated equation formulation of a given Causal Loop Diagram (CLD) and a set of associated time series with the help of neural network evolution (NEvo). NEvo enables the automated retrieval of surrogate equations for each quantity in the given CLD, hence producing a fully annotated CLD that can be used for later simulations to predict future KPI development. At the end of the paper, we provide a detailed evaluation of NEvo on a business use-case to demonstrate its single-step prediction capabilities.

Relevance:

30.00%

Publisher:

Abstract:

Hybrid simulation is a technique that combines experimental and numerical testing and has been used for the last decades in the fields of aerospace, civil and mechanical engineering. During this time, most of the research has focused on developing algorithms and the necessary technology, including, but not limited to, error minimisation techniques, phase lag compensation and faster hydraulic cylinders. However, one of the main shortcomings of hybrid simulation that has prevented its widespread use is the size of the numerical models and the effect that higher frequencies may have on the stability and accuracy of the simulation. The first chapter in this document provides an overview of the hybrid simulation method and the different hybrid simulation schemes, and the corresponding time integration algorithms, that are more commonly used in this field. The scope of this thesis is presented in more detail in chapter 2: a substructure algorithm, the Substep Force Feedback (Subfeed), is adapted in order to fulfil the necessary requirements in terms of speed. The effects of more complex models on the Subfeed are also studied in detail, and the improvements made are validated experimentally. Chapters 3 and 4 detail the methodologies used to accomplish these objectives, listing the different cases of study and detailing the hardware and software used to validate them experimentally. The third chapter contains a brief introduction to a project, the DFG Subshake, whose data have been used as a starting point for the developments shown later in this thesis. The results obtained are presented in chapters 5 and 6, the first focusing on purely numerical simulations, while the second is oriented towards a more practical application, including experimental real-time hybrid simulation tests with large numerical models. Following the discussion of the developments in this thesis, a list of the hardware and software requirements that have to be met in order to apply the methods described in this document can be found in chapter 7. The last chapter, chapter 8, focuses on conclusions and achievements extracted from the results, namely: the adaptation of the hybrid simulation algorithm Subfeed for use in conjunction with large numerical models, the study of the effect of high frequencies on the substructure algorithm, and experimental real-time hybrid simulation tests with vibrating subsystems using large numerical models and shake tables. A brief discussion of possible future research activities can be found in the concluding chapter.