936 results for Almost Identical Demand Systems model
Abstract:
This paper describes the development of an ontology for autonomous systems, as the initial stage of a research programme on autonomous systems engineering within a model-based control approach. The ontology aims to provide a unified conceptual framework for autonomous systems' stakeholders, from developers to software engineers. The modular ontology contains both generic and domain-specific concepts for the description and engineering of autonomous systems. It serves as the basis of a methodology for obtaining an autonomous system's conceptual models, the objective being to use these models as the main input to the autonomous system's model-based control system.
Abstract:
High-quality software, delivered on time and on budget, constitutes a critical part of most products and services in modern society. Our government has invested billions of dollars to develop software assets, often redeveloping the same capability many times. Recognizing the waste involved in redeveloping these assets, in 1992 the Department of Defense issued the Software Reuse Initiative. Its vision was "to drive the DoD software community from its current 're-invent the software' cycle to a process-driven, domain-specific, architecture-centric, library-based way of constructing software." Twenty years after the initiative was issued, there is evidence of this vision beginning to be realized in nonembedded systems. However, virtually every large embedded system undertaken has incurred large cost and schedule overruns, and investigations into the root cause of these overruns implicate reuse. Why are we seeing improvements in the outcomes of these large-scale nonembedded systems and worse outcomes in embedded systems? This question is the foundation for this research. The experiences of the aerospace industry have raised a number of questions about reuse and how the industry employs it in embedded systems. For example, does reuse in embedded systems yield the same outcomes as in nonembedded systems? Are the outcomes positive? If the outcomes differ, it may indicate that embedded systems should not use data from nonembedded systems for estimation. Are embedded systems using the same development approaches as nonembedded systems? Does the development approach make a difference? If embedded systems develop software differently from nonembedded systems, the same processes may not apply to both types of systems. What about the reuse of different artifacts? Perhaps certain artifacts, when reused, contribute more or are more difficult to use in embedded systems. Finally, what are the success factors and obstacles to reuse, and are they the same in embedded systems as in nonembedded systems? The research in this dissertation comprises a series of empirical studies using professionals in the aerospace and defense industry as its subjects. The main focus has been to investigate the reuse practices of embedded systems professionals and nonembedded systems professionals and to compare the methods and artifacts used against the outcomes. The research followed a combined qualitative and quantitative design. The qualitative data were collected by surveying software and systems engineers, interviewing senior developers, and reading numerous documents and other studies. Quantitative data were derived by converting survey and interview respondents' answers into codings that could be counted and measured. From the search of the existing empirical literature, we learned that reuse in embedded systems is in fact significantly different from reuse in nonembedded systems, particularly in effort under a model-based development approach and in quality where the development approach was not specified. The questionnaire showed differences between the development approaches used in embedded and nonembedded projects; in particular, embedded systems were significantly more likely to use a heritage/legacy development approach. There was also a difference in the artifacts used, with embedded systems more likely to reuse hardware, test products, and test clusters.
Nearly all the projects reported reusing code, but the questionnaire showed that code reuse brought mixed results. One of the difficulties expressed by questionnaire respondents was reusing code in embedded systems when the platform changed. Semi-structured interviews were then performed to understand why the phenomena seen in the literature review and the questionnaire were observed. We asked respected industry professionals, such as senior fellows, fellows, and distinguished members of technical staff, about their experiences with reuse. We learned that many embedded systems use heritage/legacy development approaches because those systems have been around for many years, since before models and modeling tools became available. We learned that reuse of code is beneficial primarily when the code does not require modification; especially in embedded systems, once it has to be changed, reuse of code yields few benefits. Finally, while platform independence is a goal for many in nonembedded systems, it is certainly not a goal for embedded systems professionals and in many cases is a detriment. However, both embedded and nonembedded systems professionals endorsed the idea of platform standardization. We conclude that while reuse in embedded systems and nonembedded systems differs today, the two are converging: as heritage embedded systems are phased out, models become more robust, and platforms are standardized, reuse in embedded systems will become more like reuse in nonembedded systems.
Abstract:
In American society, the incidence of divorce continues to rise. In 1974, it was estimated that 40% of all new marriages would end in divorce. When children are involved, the mother usually retains custody. Although the number of children of divorce living with their fathers is increasing, it remains a small percentage. In addition, the rate of remarriage is lower when children are involved (Hetherington et al., 1977). Consequently, a large number of children are being raised in father-absent homes, and indications are that the numbers are increasing. A recent Denver Post article predicted that 50% of all children now being born will spend some of their childhood in a single-parent home. In terms of frequency, the father-absent family is becoming quite common, even "normal," yet it often continues to be considered a "broken" home and, compared to the two-parent family, an inadequate structure in which to raise healthy children. Since father-absent families are now so common, this opinion is in need of review. This paper presents a review of the father-absence research in three areas: sex-role development, cognitive development, and personality development. The role of moderator variables is discussed, and, finally, an open systems model is proposed as a vehicle for better understanding the effects of father absence and as a guide for future research.
Abstract:
Whilst traditional optimisation techniques based on mathematical programming are in common use, they are unable to explore the complexity of decision problems addressed using agricultural system models. In these models, the full decision space is usually very large, while the solution space is characterized by many local optima. Methods to search such large decision spaces rely on effective sampling of the problem domain. Nevertheless, problem reduction based on insight into agronomic relations and farming practice is necessary to safeguard computational feasibility. Here, we present a global search approach based on an Evolutionary Algorithm (EA). We introduce a multi-objective evaluation technique within this EA framework, linking the optimisation procedure to the APSIM cropping systems model. The approach addresses the issue of system management when faced with a trade-off between economic and ecological consequences.
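Read alongside the abstract above, the following is a minimal sketch of a multi-objective EA of the kind described: a Pareto-ranking loop with tournament selection, in which a hypothetical `evaluate` function stands in for a full APSIM simulation run. The gene bounds, objective formulas and operator settings are illustrative assumptions, not the study's actual configuration.

```python
# Minimal Pareto-ranking EA sketch; `evaluate` is a stand-in for an APSIM run
# returning (gross margin to maximise, nitrate leaching to minimise).
import random

N_GENES, POP, GENS = 4, 40, 50  # e.g. sowing date, N rate, density, cultivar index

def evaluate(x):
    # Placeholder objectives with an economic/ecological trade-off.
    profit = sum(g * w for g, w in zip(x, (3.0, 5.0, 2.0, 1.0))) - 0.1 * sum(g * g for g in x)
    leaching = 0.5 * x[1] ** 2 + 0.2 * x[0]
    return profit, leaching

def dominates(a, b):
    # a dominates b: no worse in both objectives, strictly better in one.
    return a[0] >= b[0] and a[1] <= b[1] and (a[0] > b[0] or a[1] < b[1])

def pareto_rank(objs):
    # rank = number of individuals dominating you; 0 means non-dominated
    return [sum(dominates(o2, o1) for o2 in objs) for o1 in objs]

pop = [[random.uniform(0.0, 10.0) for _ in range(N_GENES)] for _ in range(POP)]
for _ in range(GENS):
    objs = [evaluate(x) for x in pop]
    ranks = pareto_rank(objs)

    def pick():
        # binary tournament on Pareto rank
        i, j = random.randrange(POP), random.randrange(POP)
        return pop[i] if ranks[i] <= ranks[j] else pop[j]

    # blend crossover plus Gaussian mutation, clipped to the gene bounds
    pop = [[min(10.0, max(0.0, (a + b) / 2.0 + random.gauss(0.0, 0.3)))
            for a, b in zip(pick(), pick())] for _ in range(POP)]

objs = [evaluate(x) for x in pop]
front = [x for x, r in zip(pop, pareto_rank(objs)) if r == 0]
print(f"{len(front)} non-dominated management strategies in the final population")
```

In the real setup each call to `evaluate` would launch an APSIM run for one candidate management strategy, which is why effective sampling and problem reduction matter: the simulation, not the EA bookkeeping, dominates the computational cost.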
Abstract:
The survival of organisations, especially SMEs, depends to a great extent on those who supply them with the required material inputs. If a supplier fails to deliver the right materials at the right time and place, and at the right price, then the recipient organisation is bound to fail in its obligations to satisfy the needs of its customers and to stay in business. Hence, the task of choosing from a list of vendors a supplier that an organisation will trust with its very existence is not an easy one. This project investigated how purchasing personnel in organisations solve the problem of vendor selection. The investigation went further to ascertain whether an expert systems model could be developed and used as a plausible solution to the problem. An extensive literature review indicated that very little research has been conducted in the area of expert systems for vendor selection, whereas many research theories in expert systems and in purchasing and supply chain management, respectively, had been reported. A survey questionnaire was designed and circulated to people in industry who actually perform vendor selection tasks. Analysis of the collected data confirmed the various factors considered during the selection process and established the order in which those factors are ranked. Five of the factors, namely Production Methods Used, Vendor's Financial Background, Manufacturing Capacity, Size of Vendor Organisation, and Supplier's Position in the Industry, showed similar patterns in the way organisations ranked them: the bigger the organisation, the more importance it attached to these factors. Further investigation revealed that respondents agreed the most important factors were Product Quality, Product Price and Delivery Date. The most apparent pattern was observed for the Vendor's Financial Background. This generated curiosity, which led to the design and development of a prototype expert system for assessing the financial profile of a potential supplier, called ESfNS. It determines whether a prospective supplier has a sound financial background. ESfNS was tested by potential users, who confirmed that expert systems have great prospects and commercial viability in this domain for solving vendor selection problems.
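As a flavour of what a rule base like ESfNS's financial-assessment module might look like, here is a hypothetical sketch; the ratios, thresholds and verdict labels are invented for illustration and are not the rules of the actual prototype.

```python
# Hypothetical rule-based screening of a prospective supplier's financial
# background, in the spirit of ESfNS. All thresholds are illustrative.
def assess_financials(current_ratio, debt_to_equity, net_margin, years_trading):
    concerns = []
    if current_ratio < 1.0:
        concerns.append("current liabilities exceed current assets")
    if debt_to_equity > 2.0:
        concerns.append("highly leveraged")
    if net_margin < 0.0:
        concerns.append("loss-making")
    if years_trading < 3:
        concerns.append("short trading history")
    verdict = "GOOD" if not concerns else ("MARGINAL" if len(concerns) == 1 else "POOR")
    return verdict, concerns

verdict, concerns = assess_financials(current_ratio=1.4, debt_to_equity=0.8,
                                      net_margin=0.06, years_trading=12)
print(verdict, "-", "; ".join(concerns) or "no concerns raised")
```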
Abstract:
The reliability of printed circuit board assemblies under dynamic environments, such as those found onboard airplanes, ships and land vehicles, is receiving more attention. This research analyses the dynamic characteristics of a printed circuit board (PCB) supported by edge retainers and plug-in connectors. By modelling the wedge retainer and connector as simply supported boundary conditions with appropriate rotational spring stiffnesses along their respective edges, with the aid of finite element codes, natural frequencies for the board that agree closely with the experimental natural frequencies are obtained. For a PCB supported by two opposite wedge retainers and a plug-in connector, with its remaining edge free of any restraint, it is found that these real supports behave somewhere between the simply supported and clamped boundary conditions, providing a percentage fixity 39.5% above the classical simply supported case. By using an eigensensitivity method, the rotational stiffnesses representing the boundary supports of the PCB can be updated effectively, so that the model represents the dynamics of the PCB accurately. The result shows that the percentage error in the fundamental frequency of the PCB finite element model is substantially reduced, from 22.3% to 1.3%. The procedure demonstrates the effectiveness of using only the vibration test frequencies as reference data when the mode shapes of the original untuned model are almost identical to the referenced modes/experimental data. When only modal frequencies are used in model improvement, the analysis is much simplified; furthermore, the time taken to obtain the experimental data is substantially reduced, as the experimental mode shapes are not required. In addition, this thesis advocates a relatively simple method for determining the support locations that maximise the fundamental frequency of vibrating structures. The technique is simple and does not require any optimisation or sequential search algorithm. The key to the procedure is to position the necessary supports so as to eliminate the lower modes of the original configuration. This is accomplished by introducing point supports along the nodal lines of the highest possible mode of the original configuration, so that all the lower modes are eliminated by the new or extra supports. The thesis also proposes inspecting the average driving-point residues along the nodal lines of vibrating plates to find the optimal support locations. Numerical examples are provided to demonstrate the method's validity. Applied to the PCB supported on three sides by two wedge retainers and a connector, it is found that the single point constraint yielding the maximum fundamental frequency is located at the mid-point of the nodal line, namely node 39. This point support increases the structure's fundamental frequency from 68.4 Hz to 146.9 Hz, or 115%.
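The frequency-only updating step lends itself to a compact illustration. Below is a minimal sketch of eigensensitivity updating on a hypothetical two-degree-of-freedom stand-in for the PCB finite element model: a ground spring k_r plays the role of a rotational support stiffness and is tuned by Newton iteration until the model's fundamental frequency matches a measured value. All numbers are illustrative, not the thesis's data.

```python
# Frequency-only model updating by eigensensitivity on a 2-DOF stand-in model.
import numpy as np
from scipy.linalg import eigh

m1, m2, k1, k2 = 0.05, 0.05, 4.0e4, 3.0e4   # illustrative masses (kg), springs (N/m)
M = np.diag([m1, m2])
f_measured = 146.9                          # target fundamental frequency (Hz)
lam_target = (2 * np.pi * f_measured) ** 2  # corresponding eigenvalue

k_r = 1.0e4                                 # initial support-stiffness guess
for _ in range(20):
    K = np.array([[k1 + k2, -k2], [-k2, k2 + k_r]])
    lam, phi = eigh(K, M)                   # eigenvectors come mass-normalised
    if abs(np.sqrt(lam[0]) / (2 * np.pi) - f_measured) < 1e-6:
        break
    # closed-form sensitivity: d(lam_1)/d(k_r) = phi_1^T (dK/dk_r) phi_1 = phi[1,0]**2
    s = phi[1, 0] ** 2
    k_r += (lam_target - lam[0]) / s        # Newton step on the first eigenvalue

print(f"updated k_r = {k_r:.1f} N/m, f1 = {np.sqrt(lam[0]) / (2 * np.pi):.2f} Hz")
```

The key property exploited, here as in the thesis, is that for a mass-normalised mode shape the eigenvalue sensitivity to a stiffness parameter is available in closed form, so no measured mode shapes are needed.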
Abstract:
The aim of the paper is to analyze a practical, real-world problem. A publishing house is considered. The publisher has contacts with a number of wholesaler/retailer enterprises and direct contact with a group of customers. Book publishers operate as a project industry. The publisher faces the problem of how to allocate the print run of a newly published book among the wholesalers and retailers, and how many copies to hold back to satisfy customers directly. The publisher is assumed to have a buyback contract with the traders. The demand for the book is unknown, but it can be estimated. The wholesalers and retailers maximize their own profits. The problem can be modeled as a one-warehouse, N-retailer supply chain with non-identical demand distributions, and can be transformed into a game-theoretic problem. Demand is assumed to follow a Poisson distribution.
Abstract:
A book publishing company is examined, which sells its titles through the usual sales chain (wholesalers and retailers) and also directly to customers. The question is how to allocate the copies of a newly published book along this sales chain and how many copies to hold back to satisfy customers directly. The distribution of demand is unknown, but it can be estimated; it is assumed to follow a Poisson distribution. The relevant costs consist of inventory holding costs and shortage (backorder) costs, and the decision maker wants to minimize them. The problem can be modeled as a one-warehouse, N-retailer supply chain with non-identical demand distributions, and its structure is similar to that of a newsvendor model.
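Taken together, the two abstracts describe a newsvendor-style allocation with Poisson demand. The sketch below computes critical-fractile stocking levels for each retailer; the cost figures and demand rates are illustrative assumptions, not data from the papers.

```python
# Critical-fractile newsvendor stocking for N retailers with non-identical
# Poisson demand. c_u = unit shortage (backorder) cost, c_o = unit holding
# (overage) cost; their ratio sets the target service level.
from scipy.stats import poisson

c_u, c_o = 6.0, 2.0                      # assumed costs per copy
beta = c_u / (c_u + c_o)                 # critical fractile = 0.75 here
demand_rates = [12.0, 30.0, 7.5, 55.0]   # hypothetical retailer demand means

total_allocated = 0
for i, lam in enumerate(demand_rates, start=1):
    s = int(poisson.ppf(beta, lam))      # smallest s with P(D <= s) >= beta
    total_allocated += s
    print(f"retailer {i}: mean demand {lam:5.1f} -> stock {s} copies")
print("copies allocated to the trade:", total_allocated)
```

In this uncapacitated form the optimal policy decomposes retailer by retailer; the one-warehouse coupling (a fixed print run and the publisher's own reserve) is what turns it into the harder allocation problem the papers study.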
Abstract:
Cytochrome P450 monooxygenases, one of the most important classes of heme-thiolate proteins, have attracted considerable interest in the biochemical community because of their catalytic versatility, substrate diversity and the great number of members in the superfamily. Although P450s are capable of catalyzing numerous difficult oxidation reactions, their relatively low stability, low turnover rates and need for electron-donating cofactors have limited their practical biotechnological and pharmaceutical applications as isolated enzymes. The goal of this study is to tailor such heme-thiolate proteins into efficient biocatalysts with high specificity and selectivity by protein engineering, and to better understand the structure-function relationship in cytochromes P450. In the effort to engineer P450cam, the prototypical member of the P450 superfamily, into an efficient peroxygenase that utilizes hydrogen peroxide via the "peroxide-shunt" pathway, site-directed mutagenesis was used to elucidate the critical roles of hydrophobic residues in the active site. Various biophysical, biochemical and spectroscopic techniques were utilized to investigate the wild-type and mutant proteins. Three important P450cam variants were obtained, showing distinct structural and functional features. For the P450cam V247H mutant, which exhibited spectral properties almost identical to the wild-type, it is demonstrated that a single amino acid switch turned the monooxygenase into an efficient peroxidase, increasing the peroxidase activity nearly one-thousand-fold. To tune the distal pocket of P450cam with polar residues, Leu 246 was replaced with a basic residue, lysine, resulting in a mutant with spectral features identical to P420, the inactive species of P450; yet this inactive-species-like mutant showed catalytic activity without the aid of any cofactors. By substituting Gly 248 with a histidine, a novel Cys-Fe-His ligation set was obtained in P450cam, representing a very rare case of His ligation in heme-thiolate proteins. In addition to serving as a convenient model for hemoprotein structural studies, the G248H mutant provided evidence about the nature of the axial ligand in cytochrome P420 and in other engineered hemoproteins with thiolate ligation. Furthermore, attempts were made to replace the proximal ligand in sperm whale myoglobin to construct a heme-thiolate protein model by mimicking the protein environment of cytochrome P450cam and chloroperoxidase.
Abstract:
The exponential growth of studies on the biological response to ocean acidification over the last few decades has generated a large amount of data. To facilitate data comparison, a data compilation hosted at the data publisher PANGAEA was initiated in 2008 and is updated on a regular basis (doi:10.1594/PANGAEA.149999). By January 2015, a total of 581 data sets (over 4,000,000 data points) from 539 papers had been archived. Here we present the developments of this data compilation in the five years since its first description by Nisumaa et al. (2010). Most of the study sites from which data have been archived are still in the Northern Hemisphere, and the number of archived data sets from studies in the Southern Hemisphere and polar oceans is still relatively low. Data from 60 studies that investigated the response of a mix of organisms or natural communities were all added after 2010, indicating a welcome shift from the study of individual organisms to communities and ecosystems. The initial imbalance, with considerably more data archived on calcification and primary production than on other processes, has improved. There is also a clear tendency towards more data archived from multifactorial studies after 2010. For easier and more effective access to ocean acidification data, the ocean acidification community is strongly encouraged to contribute to the data archiving effort, and to help develop standard vocabularies describing the variables and define best practices for archiving ocean acidification data.
Abstract:
Flemish Pass, located at the western subpolar margin, is a passage (sill depth 1200 m) constrained by the Grand Banks and the underwater plateau Flemish Cap. In addition to the Deep Western Boundary Current (DWBC) pathway offshore of Flemish Cap, Flemish Pass represents another southward transport pathway for the two modes of Labrador Sea Water (LSW), the lightest component of the North Atlantic Deep Water carried by the DWBC. This pathway avoids potential stirring regions east of Flemish Cap and deflection into the interior North Atlantic. Ship-based velocity measurements between 2009 and 2013 at 47°N, in Flemish Pass and in the DWBC east of Flemish Cap, revealed a considerable southward transport of Upper LSW through Flemish Pass (15-27%, -1.0 to -1.5 Sv). About 98% of the denser Deep LSW was carried around Flemish Cap, as Flemish Pass is too shallow for considerable transport of Deep LSW. Hydrographic time series from ship-based measurements show a significant warming of 0.3°C/decade and a salinification of 0.03/decade of the Upper LSW in Flemish Pass between 1993 and 2013. Almost identical trends were found for the evolution in the Labrador Sea and in the DWBC east of Flemish Cap. This indicates that the long-term hydrographic variability of Upper LSW in Flemish Pass, as well as in the DWBC at 47°N, is dominated by changes in the Labrador Sea that are advected southward. Fifty years of numerical ocean model simulations in Flemish Pass suggest that these trends are part of a multidecadal cycle.
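A decadal trend like the 0.3°C/decade warming quoted above is typically estimated by least-squares regression on the annual-mean series. The sketch below shows the calculation on synthetic data; the temperature series is made up, not the Flemish Pass record.

```python
# Least-squares decadal trend from an annual-mean hydrographic series.
# The temperature series below is synthetic; only the method is illustrated.
import numpy as np

years = np.arange(1993, 2014)            # 1993-2013, as in the abstract
rng = np.random.default_rng(0)
temps = 3.4 + 0.03 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

slope_per_year, intercept = np.polyfit(years, temps, 1)
print(f"trend: {10 * slope_per_year:+.2f} degC/decade")
```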
Abstract:
We analyzed projections of current and future ambient temperatures along the eastern United States in relation to the thermal tolerance of harbor seals in air. Using the earth system model HadGEM2-ES and representative concentration pathways (RCPs) 4.5 and 8.5, which correspond to two different atmospheric CO2 trajectories, we examined possible shifts in distribution based on three metrics: current preferences, the thermal limit of juveniles, and the thermal limit of adults. Our analysis focused on average ambient temperatures because harbor seals are least effective at regulating their body temperature in air, making them most susceptible to rising air temperatures in the coming years. Our study focused on the months of May, June, and August for 2041-2060 ("2050") and 2061-2080 ("2070"), as these are the months in which harbor seals historically come ashore each year to pup, breed, and molt; they are also among the warmest months of the year. We found that breeding colonies along the eastern United States will be limited by the thermal tolerance of juvenile harbor seals in air, while the foraging range will extend as far south as the thermal tolerance of adult harbor seals in air. Our analysis revealed that in 2070, harbor seal pups should be absent from the United States coastline toward the end of the summer due to exceptionally high air temperatures.
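The screening the abstract describes reduces to comparing projected air temperatures against life-stage thermal limits. The sketch below illustrates that comparison; both the projected temperatures and the thermal limits are placeholders, since the abstract does not report the actual values.

```python
# Screen projected ambient air temperatures against harbor seal thermal limits.
# Both the projections and the limits below are placeholders; the abstract does
# not give the actual HadGEM2-ES / RCP output or tolerance values.
JUVENILE_LIMIT_C = 30.0   # hypothetical thermal limit in air for juveniles
ADULT_LIMIT_C = 34.0      # hypothetical thermal limit in air for adults

projected_aug_2070 = {    # site -> mean August air temperature (degC), made up
    "Maine": 27.5, "Massachusetts": 31.0, "New Jersey": 33.5, "Virginia": 35.5,
}

for site, t in projected_aug_2070.items():
    if t <= JUVENILE_LIMIT_C:
        status = "suitable for breeding colonies (within juvenile tolerance)"
    elif t <= ADULT_LIMIT_C:
        status = "foraging range only (adults tolerate, juveniles do not)"
    else:
        status = "outside adult thermal tolerance"
    print(f"{site:13s} {t:4.1f} degC -> {status}")
```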
Abstract:
This paper examines the impact of both public debt and foreign monetary developments (the exchange rate and the foreign interest rate) on long-run money demand. The budget deficit is used as the measure of public debt. Five industrial countries are considered: Canada, the United States, Germany, the United Kingdom and France. The multivariate cointegration model of Johansen & Juselius (1990) is used to establish the relationship between these three variables and money demand. The model indirectly examines two effects through money demand: the effect of budget deficits on interest rates and the effect of foreign monetary developments on interest rates. Evidence of a long-run relationship between money demand and the stated variables is found for almost every country. The long-run exclusion test shows that all these variables enter significantly into the cointegration relation. This suggests that, in formulating monetary policy, policy makers should take into account the influence of both the budget deficit and foreign monetary developments on money demand.
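For readers wishing to reproduce this kind of analysis, statsmodels exposes the Johansen procedure. The sketch below runs the trace test on synthetic series standing in for the money-demand system; the variable names, the deterministic-term choice and the lag order are illustrative assumptions, not the paper's specification.

```python
# Johansen cointegration (trace) test, as in Johansen & Juselius (1990).
# Synthetic series stand in for money demand, the budget deficit, the exchange
# rate and the foreign interest rate; k_ar_diff=2 is an assumed lag order.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(42)
n = 200
common_trend = np.cumsum(rng.normal(size=n))             # shared stochastic trend
data = np.column_stack([
    common_trend + rng.normal(scale=0.5, size=n),        # "money demand"
    0.8 * common_trend + rng.normal(scale=0.5, size=n),  # "budget deficit"
    np.cumsum(rng.normal(size=n)),                       # "exchange rate"
    np.cumsum(rng.normal(size=n)),                       # "foreign interest rate"
])

res = coint_johansen(data, det_order=0, k_ar_diff=2)
for r, (stat, cvs) in enumerate(zip(res.lr1, res.cvt)):
    print(f"H0: rank <= {r}: trace = {stat:6.2f}, 5% critical value = {cvs[1]:6.2f}")
```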