968 results for Construction set
Abstract:
Software Product Line Engineering (SPLE) has proved to have significant advantages in family-based software development, but it also implies the upfront design of a product-line architecture (PLA) from which individual product applications can be engineered. The big upfront design associated with PLAs conflicts with the current need of "being open to change". However, the turbulence of the current business climate makes change inevitable in order to stay competitive, and requires PLAs to be open to change even late in the development. The trend of "being open to change" is manifested in the Agile Software Development (ASD) paradigm, and it is spreading to the domain of SPLE. To reduce the big upfront design of PLAs as currently practiced in SPLE, new paradigms are being created, one being Agile Product Line Engineering (APLE). APLE aims to make the development of product lines more flexible and adaptable to change, as promoted in ASD. To put APLE into practice it is necessary to provide mechanisms that assist and guide the agile construction and evolution of PLAs while complying with the "be open to change" agile principle. This thesis defines a process for the agile construction and evolution of product-line architectures, which we refer to as Agile Product-Line Architecting (APLA). The APLA process provides agile architects with a set of models for describing, documenting and tracing PLAs, as well as an algorithm to analyze change impact. Both the models and the change impact analysis offer the following capabilities: flexibility and adaptability at the time of defining software architectures, enabling change during the incremental and iterative design of PLAs (anticipated or planned changes) and their evolution (unanticipated or unforeseen changes).
Assistance in checking architectural integrity through change impact analysis in terms of architectural concerns, such as dependencies on earlier design decisions, rationale, constraints, and risks. Guidance in the change decision-making process through change impact analysis in terms of architectural components and connections. Therefore, APLA provides the mechanisms required to construct and evolve PLAs that can easily be refined iteration after iteration during the APLE development process. These mechanisms are provided in a modeling framework called FPLA. The contributions of this thesis have been validated through a project concerning a metering management system in electrical power networks. This case study took place in an i-smart software factory in collaboration with the Technical University of Madrid and Indra Software Labs. Software Product Line Engineering (SPLE) has proved to have significant advantages in the development of software based on product families. SPLE is a paradigm based on the systematic reuse of a set of common features shared by the products of the same domain or family, and on mass customization through a well-defined variability that differentiates one product from another. This type of development requires the initial design of a product-line architecture (PLA) from which the individual products of the family are designed and implemented. The upfront investment required to design PLAs conflicts with the current need to remain continuously "open to change", change that is ever more frequent and radical in the software industry. To stay competitive it is inevitable to adapt to change, even in the late stages of software product development.
This trend is especially manifested in the Agile Software Development (ASD) paradigm and is also spreading to the domain of SPLE. With the aim of reducing the upfront investment in PLA design as it is conceived in SPLE, new approaches such as Agile Product Line Engineering (APLE) have emerged in recent years. APLE proposes the development of product lines in a more flexible way, adaptable to change, iterative and incremental. To achieve this, mechanisms are needed that help and guide product-line architects in the agile design and evolution of PLAs while complying with the agile principle of being open to change. This thesis defines a process for the "agile construction and evolution of software product-line architectures". This process is called Agile Product-Line Architecting (APLA). The APLA process provides software architects with a set of models for describing, documenting and tracing PLAs, as well as an algorithm to analyze change impact. The models and the change impact analysis offer: flexibility and adaptability when defining software architectures, facilitating change during the incremental and iterative design of PLAs (anticipated or planned changes) and their evolution (unforeseen changes); assistance in verifying architectural integrity through change impact analysis in terms of dependencies between design decisions, the rationale for design decisions, constraints, risks, etc.; and guidance in change-related decision making through change impact analysis in terms of components and connections.
In this way, APLA is presented as a solution for the construction and evolution of PLAs such that they can easily be refined iteration after iteration of an agile product-line life cycle. This solution has been implemented in a tool called FPLA (Flexible Product-Line Architecture) and has been validated through its application in a project developing a metering management system in electrical power networks. The project was carried out in a global software factory in collaboration with the Technical University of Madrid and Indra Software Labs.
Abstract:
Comparing the different bids submitted in the tender for a project, under the traditional contract system of open measurement and closed unit prices, requires analysis tools capable of discriminating between proposals that, despite having a similar overall amount, may have a very different economic impact during execution. One situation not easily detected with traditional methods is the behaviour of the actual cost in the face of variations between the quantities actually executed on site and those estimated in the project. This text proposes to address this situation through a quantitative risk analysis system such as the Monte Carlo method. This procedure, as is well known, consists of letting the input data that define the problem vary according to defined probability functions, generating a large number of test cases, and treating the results statistically to obtain the most probable final values, together with the parameters needed to measure the reliability of the estimate. A model for the comparison of bids is presented, developed so that it can be applied in real cases by applying to the known data variation conditions that are easy for the professionals performing these tasks to establish. ABSTRACT: The comparison of the different bids in the tender for a project, under the traditional contract system based on closed unit rates and open re-measurement, requires analysis tools that are able to discriminate between proposals that, while having a similar overall amount, might show a very different economic behaviour during the execution of the works. One situation not easily detected by traditional methods is the reaction of the actual cost to changes in the quantities of work finally executed with respect to those estimated in the project.
This paper intends to address this situation through the Monte Carlo method, a system of quantitative risk analysis. This procedure, as is well known, allows the input data defining the problem to vary within well-defined probability functions, generating a large number of test cases, the results being statistically treated to obtain the most probable final values, together with the rest of the parameters needed to measure the reliability of the estimate. We present a model for the comparison of bids, designed in such a way that it can be applied in real cases, based on data and assumptions that are easy to understand and set up by the professionals who perform these tasks.
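The procedure summarized above (letting executed quantities vary within probability functions, generating many cases, and treating the results statistically) can be sketched as follows. This is a minimal illustration, not the paper's model: the cost items, unit prices, and the symmetric triangular variation of ±25% around the estimated quantities are assumptions chosen for the example.

```python
import random
import statistics

# Hypothetical tender: two bids quote different unit prices for the same
# estimated quantities. Actual quantities executed on site are allowed to
# vary around the estimates (triangular distribution, mode = estimate).
estimated_quantities = {"earthworks_m3": 5000, "concrete_m3": 1200, "steel_kg": 90000}
bids = {
    "bid_A": {"earthworks_m3": 4.0, "concrete_m3": 95.0, "steel_kg": 1.10},
    "bid_B": {"earthworks_m3": 6.5, "concrete_m3": 90.0, "steel_kg": 1.00},
}

def simulate_total(unit_prices, n_trials=10000, spread=0.25, seed=42):
    """Monte Carlo estimate of the total cost when the executed
    quantities deviate from the estimates by up to +/- spread."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        total = 0.0
        for item, qty in estimated_quantities.items():
            actual = rng.triangular(qty * (1 - spread), qty * (1 + spread), qty)
            total += actual * unit_prices[item]
        totals.append(total)
    # Mean gives the most probable final value; the standard deviation
    # measures the reliability of the estimate.
    return statistics.mean(totals), statistics.stdev(totals)

for name, prices in bids.items():
    mean, sd = simulate_total(prices)
    print(f"{name}: mean cost {mean:,.0f}, std dev {sd:,.0f}")
```

Two bids with similar deterministic totals can show quite different dispersions here, which is exactly the discrimination the traditional comparison misses.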
Abstract:
An intrinsic feature of yeast artificial chromosomes (YACs) is that the cloned DNA is generally in the same size range (i.e., approximately 200-2000 kb) as the endogenous yeast chromosomes. As a result, the isolation of YAC DNA, which typically involves separation by pulsed-field gel electrophoresis, is frequently confounded by the presence of a comigrating or closely migrating endogenous yeast chromosome(s). We have developed a strategy that reliably allows the isolation of any YAC free of endogenous yeast chromosomes. Using recombination-mediated chromosome fragmentation, a set of Saccharomyces cerevisiae host strains was systematically constructed. Each strain contains defined alterations in its electrophoretic karyotype, which provide a large-size interval devoid of endogenous chromosomes (i.e., a karyotypic "window"). All of the constructed strains contain the kar1-delta 15 mutation, thereby allowing the efficient transfer of a YAC from its original host into an appropriately selected window strain using the kar1-transfer procedure. This approach provides a robust and efficient means to obtain relatively pure YAC DNA regardless of YAC size.
Abstract:
The potential of integrating multiagent systems and virtual environments has not been exploited to its full extent. This paper proposes a grammar-based model, called Minerva, to construct complex virtual environments that integrate the features of agents. A virtual world is described as a set of dynamic and static elements. The static part is represented by a sequence of primitives and transformations, and the dynamic elements by a series of agents. Agent activation and communication are achieved using events, created by so-called event generators. The grammar defines a descriptive language with a simple syntax and a semantics defined by functions. The semantic functions allow the scene to be displayed on a graphics device and describe the activities of the agents, including artificial intelligence algorithms and reactions to physical phenomena. To illustrate the use of Minerva, a practical example is presented: a simple robot simulator that considers the basic features of a typical robot. The result is a simple, functional simulator. Minerva is a reusable, integral, and generic system, which can be easily scaled, adapted, and improved. The description of the virtual scene is independent of its representation and of the elements with which it interacts.
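The separation the abstract describes (a static part as a sequence of primitives and transformations, a dynamic part as agents activated by events) can be sketched roughly as below. The class and method names are illustrative assumptions, not Minerva's actual language or API.

```python
# Illustrative sketch of the scene/agent split described above.
# Names (Agent, Scene, emit) are hypothetical, not from the paper.

class Agent:
    """A dynamic element that reacts to named events."""
    def __init__(self, name):
        self.name = name
        self.log = []

    def on_event(self, event):
        # In the paper's terms, an AI algorithm or a reaction to a
        # physical phenomenon would be triggered here.
        self.log.append(event)

class Scene:
    def __init__(self):
        self.statics = []   # ordered primitives/transformations
        self.agents = []    # dynamic elements

    def add_static(self, primitive):
        self.statics.append(primitive)

    def add_agent(self, agent):
        self.agents.append(agent)

    def emit(self, event):
        """Event generator: activates agent behaviour and communication."""
        for agent in self.agents:
            agent.on_event(event)

    def render(self):
        # A real semantic function would draw on a graphics device; the
        # description itself is independent of any representation, so
        # here we simply return it.
        return list(self.statics)

scene = Scene()
scene.add_static(("cube", 1.0))
scene.add_static(("translate", 2.0, 0.0, 0.0))
robot = Agent("robot")
scene.add_agent(robot)
scene.emit("sensor_triggered")
print(scene.render(), robot.log)
```

The point of the split is that the same description can be given different semantic functions (display, simulation, logging) without touching the scene itself.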
Abstract:
It has been widely documented that when Building Information Modelling (BIM) is used, there is a shift of effort to the design phase. Little investigation has been done into this shift of effort and how it impacts costs. It can be difficult to justify the increased expenditure on BIM in a market that is heavily driven by costs. There are currently studies attempting to quantify the return on investment (ROI) of BIM, whose returns can be seen to balance out the shift of effort and costs to the design phase. These studies, however, quantify the ROI based on the individual stakeholder's investment, without considering the impact that their project partners' use of BIM may have on their own profitability. In this study, a questionnaire investigated the opinions and experience of construction professionals, representing clients, consultants, designers and contractors, to determine fluctuations in costs by their magnitude and when they occur. These factors were examined more closely by interviewing senior members representing each of the stakeholder categories and comparing their experience of using BIM in environments where their project partners were also using BIM and where they were not. This determined how the use of and investment in BIM impacts others and how costs are redistributed, not just through time but also between stakeholders and categories of costs. Some of these cost fluctuations, and how the cost of BIM is currently financed, are also highlighted in several case studies. The results show that the current distribution of costs, set for traditional 2D delivery, is hindering the potential success of BIM. There is also evidence that stakeholders who do not use BIM may benefit financially from the BIM use of others, and that collaborative BIM differs significantly from 'lonely' BIM in terms of benefits and profitability.
Abstract:
Beneath the relations among states, and distinct from the exchanges of an autonomous regional or global civil society, there is another set of international practices which is neither public nor private but parapublic. The Franco-German parapublic underpinnings consist of publicly funded youth and educational exchanges, some two thousand city and regional partnerships, a host of institutes and associations concerned with Franco-German matters, and various other parapublic elements. This institutional reality provides resources, socializes the participants of its programs, and generates social meaning. Simultaneously, parapublic activity faces severe limits. In this paper I clarify the concept of “parapublic underpinnings” of international relations and flesh out their characteristics for the relationship between France and Germany. I then evaluate the effects and limits of this type of activity, and relate this paper’s findings and arguments to recent research on transnationalism, Europeanization, and denationalization.
Abstract:
Some numbers have special subtitles, as follows: no. 5, sect. 8/11, "proposal for installation of cable feed pipes..."; no. 16/18/36/37 [combined], "proposal for construction of signal towers..."; no. 18 (180th street and 239th street yards), "proposal for furnishing and erecting structural steel for inspection sheds..."; no. 19/22, sect. 2, "proposal for erection of structural steel..."; no. 49, sect. 1/2, "proposal for construction of concrete track floors and platforms..."; no. 49, sect. 3, "proposal for the supply of structural steel..."
Abstract:
No. 36/37 & 50, sect. 1-3. Queensboro subway.--No. 39, sect. 2. Broadway - Fourth avenue (New Utrecht avenue).--No. 43, 4/38, see no. 4/38, 43.--No. 48, sect. 1-2. Seventh avenue - Lexington avenue.--No. 50. Queensboro subway (Hunters Point avenue station).
Abstract:
We report the construction of the mouse full-length cDNA encyclopedia, the most extensive view of a complex transcriptome, on the basis of preparing and sequencing 246 libraries. Before cloning, cDNAs were enriched for full-length inserts by Cap-Trapper and, in most cases, aggressively subtracted/normalized. We have produced 1,442,236 successful 3'-end sequences clustered into 171,144 groups, from which 60,770 clones were fully sequenced cDNAs annotated in the FANTOM-2 annotation. We have also produced 547,149 5'-end reads, which clustered into 124,258 groups. Altogether, these cDNAs were further grouped into 70,000 transcriptional units (TU), which represent the best coverage of a transcriptome so far. By monitoring the extent of normalization/subtraction, we define the tentative equivalent coverage (TEC), which was estimated to be equivalent to >12,000,000 ESTs derived from standard libraries. High coverage explains the discrepancies between the very large numbers of clusters (and TUs) of this project, which also include non-protein-coding RNAs, and the lower gene number estimates of genome annotations. Altogether, 5'-end clusters identify regions that are potential promoters for 8637 known genes, and 5'-end clusters suggest the presence of almost 63,000 transcriptional starting points. An estimate of the frequency of polyadenylation signals suggests that at least half of the singletons in the EST set represent real mRNAs. Clones accounting for about half of the predicted TUs await further sequencing. The continued high discovery rate suggests that the task of transcriptome discovery is not yet complete.
Abstract:
The number of mammalian transcripts identified by full-length cDNA projects and genome sequencing projects is increasing remarkably. Clustering them into a strictly nonredundant and comprehensive set provides a platform for functional analysis of the transcriptome and proteome, but the quality of the clustering and its predictive usefulness have previously required manual curation to identify truncated transcripts and inappropriate clustering of closely related sequences. A Representative Transcript and Protein Sets (RTPS) pipeline was previously designed to identify the nonredundant and comprehensive set of mouse transcripts based on clustering of a large mouse full-length cDNA set (FANTOM2). Here we propose an alternative method that is more robust, requires less manual curation, and is applicable to other organisms in addition to mouse. RTPSs of human, mouse, and rat have been produced by this method and used for validation. Their comprehensiveness and quality are discussed by comparison with other clustering approaches. The RTPSs are available at ftp://fantom2.gsc.riken.go.jp/RTPS/. © 2004 Elsevier Inc. All rights reserved.
Abstract:
Ontologies have become a key component in the Semantic Web and knowledge management. One accepted goal is to construct ontologies from a domain-specific set of texts. An ontology reflects the background knowledge used in writing and reading a text. However, a text is an act of knowledge maintenance, in that it reinforces the background assumptions, alters links and associations in the ontology, and adds new concepts. This means that background knowledge is rarely expressed in a machine-interpretable manner. When it is, it is usually at the conceptual boundaries of the domain, e.g. in textbooks or when ideas are borrowed into other domains. We argue that a partial solution to this lies in searching external resources such as specialized glossaries and the internet. We show that randomly selected concept pairs from the Gene Ontology do not occur in a relevant corpus of texts from the journal Nature. In contrast, a significant proportion can be found on the internet. Thus, we conclude that sources external to the domain corpus are necessary for the automatic construction of ontologies.
Abstract:
We have recently developed a principled approach to interactive non-linear hierarchical visualization [8] based on the Generative Topographic Mapping (GTM). Hierarchical plots are needed when a single visualization plot is not sufficient (e.g. when dealing with large quantities of data). In this paper we extend our system by giving the user a choice of initializing the child plots of the current plot in either interactive or automatic mode. In the interactive mode the user interactively selects "regions of interest" as in [8], whereas in the automatic mode an unsupervised minimum message length (MML)-driven construction of a mixture of GTMs is used. The latter is particularly useful when the plots are covered with dense clusters of highly overlapping data projections, making it difficult to use the interactive mode. Such a situation often arises when visualizing large data sets. We illustrate our approach on a data set of 2300 18-dimensional points and mention an extension of our system to accommodate discrete data types.
Abstract:
This thesis reports the findings of three studies examining relationship status and identity construction in the talk of heterosexual women, from a feminist and social constructionist perspective. Semi-structured interviews were conducted with 12 women in study 1 and 13 women in study 2, between the ages of twenty and eighty-seven, discussing their experiences of relationships. All interviews were transcribed and analysed using discourse analysis, by hand and using the Nudist 6 program. The resulting themes reveal distinct age-related marital status expectations. Unmarried women were aware they had to marry by a 'certain age' or face a 'lonely spinsterhood'. Through marriage women gained a socially accepted position associated with responsibility for others, self-sacrifice, a home-focused lifestyle and relational identification. Divorce was constructed as the consequence of personal faults and poor relationship care, reassuring the married of their own control over their status. Older unmarried women were constructed as deviant and pitiable, occupying a social purgatory as a result of transgressing these valued conventions. Study 3 used repertory grid tasks with 33 women, analysing transcripts and notes alongside numerical data using the Web Grid II internet analysis tool, to produce principal components maps demonstrating the relationships between relationship terms and statuses. This study illuminated the consistency with which women of different ages and statuses saw marriage as their ideal living situation, and outlined the associated domestic responsibilities. Spinsters and single-again women were defined primarily by their lack of marriage and by loneliness. This highlighted the devalued position of older unmarried women.
The results of these studies indicated a consistent set of age-related expectations of relationship status, acknowledged by women and reinforced by their families and friends, which render many unmarried women deviant and fail to acknowledge the potential variety of women’s ways of living.
Abstract:
Many planning and control tools, especially network analysis, have been developed in the last four decades. The majority of them were created in military organizations to solve the problem of planning and controlling research and development projects. The original version of the network model (i.e. C.P.M./PERT) was transplanted to the construction industry without consideration of the special nature and environment of construction projects. It suited the purpose of setting up targets and defining objectives, but it failed to satisfy the requirements of detailed planning and control at the site level. Several analytical and heuristic rule-based methods were designed and combined with the structure of C.P.M. to eliminate its deficiencies. None of them provides a complete solution to the problem of resource, time and cost control. VERT was designed to deal with new ventures; it is suitable for project evaluation at the development stage. CYCLONE, on the other hand, is concerned with the design and micro-analysis of the production process. This work introduces an extensive critical review of the available planning techniques and addresses the problem of planning for site operation and control. Based on an outline of the nature of site control, this research developed a simulation-based network model which combines part of the logic of both VERT and CYCLONE. Several new nodes were designed to model the availability and flow of resources and the overhead and operating costs, along with special nodes for evaluating time and cost. A large software package was written to handle the input, the simulation process and the output of the model. This package is designed to run on any microcomputer using the MS-DOS operating system. Data from real-life projects were used to demonstrate the capability of the technique.
Finally, a set of conclusions is drawn regarding the features and limitations of the proposed model, and recommendations for future work are outlined at the end of this thesis.
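The kind of node the abstract describes (resource availability and flow, operating cost, overhead cost, time evaluation) can be sketched in miniature as follows. This is a hedged illustration of the general simulation idea, not the thesis's actual node set or package; all activity data are invented.

```python
import random

# Illustrative site-simulation sketch: each activity draws crews from a
# shared resource pool, takes a stochastic duration, and accrues
# operating cost; overhead accrues with total elapsed time.

class ResourcePool:
    """Models the availability and flow of a single resource type."""
    def __init__(self, units):
        self.units = units

    def acquire(self, n):
        if self.units < n:
            return False
        self.units -= n
        return True

    def release(self, n):
        self.units += n

def run_activity(pool, crews, mean_days, rate_per_day, rng):
    """Returns (duration, operating_cost) once the crews are available."""
    if not pool.acquire(crews):
        raise RuntimeError("insufficient resources")
    # Uniform +/-20% around the mean duration, a stand-in for whatever
    # distribution a real model would assign to the activity.
    duration = rng.uniform(0.8 * mean_days, 1.2 * mean_days)
    pool.release(crews)
    return duration, duration * rate_per_day * crews

rng = random.Random(7)
pool = ResourcePool(units=5)
total_time = total_operating = 0.0
for mean_days, crews, rate in [(10, 2, 400.0), (6, 3, 350.0), (8, 1, 500.0)]:
    d, cost = run_activity(pool, crews, mean_days, rate, rng)
    total_time += d          # activities run serially for simplicity
    total_operating += cost
overhead = total_time * 250.0  # overhead-cost node: per-day site overhead
print(f"duration {total_time:.1f} days, cost {total_operating + overhead:,.0f}")
```

Repeating such a run many times with different random seeds is what turns this from a single schedule into a simulation-based estimate of time and cost, which is the role the network model plays at the site level.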
Building up resilience of construction sector SMEs and their supply chains to extreme weather events
Abstract:
The wider scientific community now accepts the threat of climate change as real and thus acknowledges the importance of implementing adaptation measures in a global context. In the UK, the physical effects of climate change are likely to be felt directly in the form of extreme weather events, which are predicted to escalate in number and severity under changing climatic conditions. The construction industry, which consists of supply chains running across various other industries, economies and regions, will also be affected by these events. It is therefore important that construction organisations are well prepared to withstand the effects of extreme weather events, not only those directly affecting their organisations but also those affecting their supply chains, which in turn might affect the organisation concerned. Given that more than 99% of construction sector businesses are SMEs, the area can benefit significantly from policy making to improve SME resilience and coping capacity. This paper presents the literature review and synthesis of a doctoral research study undertaken to address the issue of extreme weather resilience of construction sector SMEs and their supply chains. The main contribution of the paper, to both academia and practitioners, is a synthesis model that conceptualises the factors that enhance the resilience of SMEs and their supply chains against extreme weather events. This synthesis model forms the basis of a decision-making framework that will enable SMEs both to reduce their vulnerability and to enhance their coping capacity against extreme weather. The value of this paper is further extended by the overall research design that is set forth as the way forward.