954 results for Design problems
Abstract:
This paper proposes and investigates a metaheuristic tabu search algorithm (TSA) that generates optimal or near-optimal solution sequences for the feedback length minimization problem (FLMP) associated with a design structure matrix (DSM). The FLMP is a non-linear combinatorial optimization problem belonging to the NP-hard class, so finding an exact optimal solution is very hard and time-consuming, especially on medium and large problem instances. First, we introduce the subject and review the related literature and problem definitions. Using the tabu search method (TSM) paradigm, the paper presents a new tabu search algorithm that generates optimal or sub-optimal solutions for the FLMP using two neighborhoods: swapping two activities and shifting an activity to a different position. Furthermore, the paper reports numerical results used to analyze the performance of the proposed TSA and to set appropriate values for its parameters. We then compare our results on benchmark problems with those already published in the literature. We conclude that the proposed tabu search algorithm is very promising: it outperforms the existing methods, and no other tabu search method for the FLMP has been reported in the literature. Applied to the process layer of multidimensional design structure matrices, the proposed algorithm proves to be a key optimization method for optimal product development.
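A minimal sketch of such a tabu search, assuming the feedback length of an ordering is the summed positional distance of the feedback marks (the exact objective, tabu tenure and iteration budget are assumptions; the two neighbourhoods are the swap and shift moves named above):

def feedback_length(order, dsm):
    """Assumed objective: dsm[a][b] == 1 means activity a needs the
    output of activity b; a feedback arises when b is scheduled after
    a, and its length is taken as the positional distance."""
    pos = {act: i for i, act in enumerate(order)}
    n = len(dsm)
    return sum(pos[b] - pos[a]
               for a in range(n) for b in range(n)
               if dsm[a][b] and pos[b] > pos[a])

def neighbours(order):
    """The two neighbourhoods: swap two activities, or shift one
    activity to another position."""
    n = len(order)
    for i in range(n):
        for j in range(i + 1, n):
            s = order[:]
            s[i], s[j] = s[j], s[i]
            yield ("swap", i, j), s
    for i in range(n):
        for j in range(n):
            if i != j:
                s = order[:]
                s.insert(j, s.pop(i))
                yield ("shift", i, j), s

def tabu_search(dsm, iters=500, tenure=10):
    cur = list(range(len(dsm)))
    best, best_cost = cur[:], feedback_length(cur, dsm)
    tabu = {}  # move -> first iteration at which it is allowed again
    for it in range(iters):
        candidates = []
        for move, s in neighbours(cur):
            cost = feedback_length(s, dsm)
            # tabu check with aspiration: a tabu move is allowed only
            # if it improves on the best solution found so far
            if tabu.get(move, 0) > it and cost >= best_cost:
                continue
            candidates.append((cost, move, s))
        if not candidates:
            break
        cost, move, cur = min(candidates)
        tabu[move] = it + tenure
        if cost < best_cost:
            best, best_cost = cur[:], cost
    return best, best_cost

# Example: a 4-activity DSM whose natural order has one feedback mark;
# reordering can remove it entirely.
dsm = [[0, 0, 1, 0],
       [1, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 1, 0]]
print(tabu_search(dsm, iters=200))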
Abstract:
In decentralised rural electrification through solar home systems, private companies and promoting institutions face the problem of deploying maintenance structures to operate and guarantee the service of the solar systems over long periods (ten years or more). The problems linked to decentralisation, such as the dispersion of dwellings, difficult access and maintenance needs, make this an arduous task. This paper proposes an innovative design tool created ad hoc for photovoltaic rural electrification, based on a real photovoltaic rural electrification programme in Morocco as a case study. The tool is built on a mathematical model comprising a set of decision variables (location, transport, etc.) that must meet certain constraints, with the minimum cost of the operation and maintenance activity as the optimisation criterion, assuming an established quality of service. The main output of the model is the overall cost of the maintenance structure. The model establishes the best location for the local maintenance headquarters and warehouses in a given region, as well as the number of maintenance technicians and vehicles required.
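The abstract does not reproduce the full mathematical model; the toy sketch below only illustrates its general shape — open maintenance headquarters sites and assign villages to them at minimum fixed-plus-travel cost, subject to a response-radius quality-of-service constraint. All site names, distances and costs are invented for illustration.

from itertools import combinations

sites = ["SiteA", "SiteB", "SiteC"]                  # candidate HQ locations
fixed_cost = {"SiteA": 900, "SiteB": 700, "SiteC": 650}
distances = [                                        # km from village v to each site
    [40, 210, 300],
    [120, 60, 250],
    [200, 90, 110],
    [310, 180, 50],
    [90, 140, 220],
]
MAX_KM = 150       # quality-of-service constraint: response radius
COST_PER_KM = 2.0  # travel cost per maintenance visit

def total_cost(open_sites):
    cost = sum(fixed_cost[s] for s in open_sites)
    for row in distances:
        d = min(row[sites.index(s)] for s in open_sites)
        if d > MAX_KM:            # a village outside the radius -> infeasible
            return float("inf")
        cost += COST_PER_KM * d   # each village served from its nearest HQ
    return cost

# Tiny instance, so exhaustive enumeration of site subsets suffices;
# the paper's real model would use mathematical programming instead.
best = min((subset for r in range(1, len(sites) + 1)
            for subset in combinations(sites, r)),
           key=total_cost)
print(best, total_cost(best))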
Abstract:
Research in human-computer interaction (HCI) covers both technological and human behavioural concerns. As a consequence, contributions in HCI research tend to be oriented towards either engineering or the social sciences. In HCI, the purpose of practical research contributions is to reveal unknown insights about human behaviour and its relationship to technology. Practical research methods normally used in HCI include formal experiments, field experiments, field studies, interviews, focus groups, surveys, usability tests, case studies, diary studies, ethnography, contextual inquiry, experience sampling, and automated data collection. In this paper, we report on our experience using focus groups, surveys and interviews as evaluation methods, and on how we adapted them to develop artefacts: either interface designs or information and technological systems. Four projects exemplify how the different methods were applied to gather information about users' wants, habits, practices, concerns and preferences. The goal was to build an understanding of the attitudes and satisfaction of the people who might interact with a technological artefact or information system. In turn, we intended to design information systems and technological applications that promote resilience in organisations (a set of routines that allow recovery from obstacles) and better user experiences. Organisations can also be viewed here from a systems perspective, meaning that system perturbations, even failures, can be characterized and improved upon. The term resilience has been applied to everything from real estate to the economy, sports, events, business, psychology, and more. In this study, we highlight that resilience is also made up of a number of skills and abilities (self-awareness, creating meaning from experience, self-efficacy, optimism, and building strong relationships) that are foundational ingredients people should draw on when enhancing an organisation's resilience. Resilience enhances knowledge of the resources available to people confronting existing problems.
Abstract:
This document presents an Enterprise Application Integration (EAI) based proposal for research outcomes and technological information management. The proposal addresses national and international science and research outcomes information management and the corresponding information systems. Information systems interoperability problems, approaches, technologies and integration tools are presented and applied to the research outcomes information management case. Both a business and a technological perspective are provided, including conceptual analysis and modelling, an integration solution based on a Domain-Specific Language (DSL), and the integration platform that executes the proposed solution. For illustrative purposes, the role and information system needs of a research unit are taken as the representative case.
Abstract:
In knowledge technology work, as expressed by the scope of this conference, there are a number of communities, each uncovering new methods, theories, and practices. The Library and Information Science (LIS) community is one such community. This community, through tradition and innovation, theory and practice, organizes knowledge and develops knowledge technologies formed by iterative research hewn to the values of equal access and discovery for all. The Information Modeling community is another contributor to knowledge technologies. It concerns itself with the construction of symbolic models that capture the meaning of information and organize it in ways that are computer-based but human-understandable. A recent paper that examines certain assumptions in information modeling builds a bridge between these two communities, offering a forum for a discussion of common aims from a common perspective. In a June 2000 article, Parsons and Wand separate classes from instances in information modeling in order to free instances from what they call the “tyranny” of classes. They attribute a number of problems in information modeling to inherent classification, that is, to the disregard for the fact that instances can be conceptualized independently of any class assignment. By faceting instances from classes, Parsons and Wand strike a sonorous chord with classification theory as understood in LIS. In the practice community and in the publications of LIS, faceted classification shifted the paradigm of knowledge organization theory in the twentieth century. Here, with the proposal of inherent classification and the resulting layered information modeling, a clear line joins the LIS classification theory community and the information modeling community. Both communities have their eyes turned toward networked resource discovery, and with this conceptual conjunction a new paradigmatic conversation can take place. Parsons and Wand propose that the layered information model can facilitate schema integration, schema evolution, and interoperability. These three spheres of information modeling have their own connotations, but are not distant from the aims of classification research in LIS. In this new conceptual conjunction, established by Parsons and Wand, information modeling through the layered information model can expand the horizons of classification theory beyond LIS, promoting a cross-fertilization of ideas on the interoperability of subject access tools like classification schemes, thesauri, taxonomies, and ontologies. This paper examines the common ground between the layered information model and faceted classification, establishing a vocabulary and outlining some common principles. It then turns to the issue of schema, the horizons of conventional classification, and the differences between Information Modeling and Library and Information Science. Finally, a framework is proposed that deploys an interpretation of the layered information modeling approach in a knowledge technologies context. In order to design subject access systems that will integrate, evolve and interoperate in a networked environment, knowledge organization specialists must consider a semantic class independence of the kind Parsons and Wand propose for information modeling.
Abstract:
The relationship between career counseling and psychotherapy is not a new subject. The debate allows the affirmation of career counseling as a dimension of personal counseling and recognizes the close relationship between psychosocial and career issues (Blustein & Spengler, 1995). The connection between these two approaches paves the way for the integration of career counseling with psychotherapy. Indeed, the inseparability of mental health and career issues frequently leads psychotherapists to help their clients to deal with work satisfaction, underemployment or unemployment through psychotherapy. Moreover, when working with specific populations (e.g., people with intellectual disabilities and people with addiction or mental health problems), psychotherapy calls for occupational integration to consolidate and enhance therapeutic gains (Blustein, 1987; Jordan & Kahnweiler, 1995; Leff & Warner, 2006).
Abstract:
The cyclization of pseudoionone yields a mixture of alpha-ionone, beta-ionone and gamma-ionone. By careful control of the reagent and reaction conditions, either the alpha- or the beta-isomer can be favoured. alpha-Ionone has a violet odour and is widely used in perfumery and flavours; beta-ionone is the main precursor of vitamin A and beta-carotene. Traditionally, strong homogeneous catalysts such as sulphuric acid and phosphoric acid have been used, which pose corrosion, product-separation and waste-disposal problems. These problems can be overcome by the use of solid acid catalysts. This work reports the cyclization of pseudoionone over USY zeolites at 80 °C. The initial activity is observed to increase with the Si/Al ratio of the zeolite up to a maximum, obtained with USY3; at higher Si/Al ratios, a decrease in catalytic activity is observed. Selectivity to ionone isomers is around 42% at 75% pseudoionone conversion, after 24 h of reaction. The USY3 zeolite was reused four times with the same catalyst sample under the same conditions, and a stabilization of the catalytic activity was observed after the second use.
Abstract:
The interdisciplinary relationship between industrial design and mechanical engineering is a sensitive one. This research focuses on understanding how this relation can be positively mediated in order to foster innovation. In this paper, technology is considered for this role, since at certain historical moments it has served as an integrator of the two disciplines in processes that led to innovation. By means of an extensive literature review covering three different periods of technological development, the positioning of both disciplines in society and their link with technology are analyzed and compared. The three selected case studies illustrate precisely this positioning of technology between the two disciplines and society. The literature holds that industrial design is rooted in the rise of criticism against both the machine and mechanized production, an approach opposed to the current paradigm, in which design plays a fundamental role in adapting technology to society. The social problems caused by mechanized mass production also triggered the emergence of mechanical engineering as a professionalized discipline. Technology was thus intrinsically connected with the emergence and subsequent evolution of both industrial design and mechanical engineering. In technology's conflict with society lies the reform and regulation of design practice, in its broadest sense.
Abstract:
Analytics is the technology concerned with manipulating data to produce information able to change the world we live in every day. Over the last decade, analytics has been used extensively to cluster people's behaviour and predict their preferences: items to buy, music to listen to, movies to watch, and even electoral choices. The most advanced companies have succeeded in steering people's behaviour using analytics. Despite this evidence of the power of analytics, it is rarely applied to the big data collected within supply chain systems (i.e. distribution networks, storage systems and production plants). This PhD thesis explores the fourth research paradigm (the generation of knowledge from data) applied to supply chain system design and operations management. An ontology defining the entities and the metrics of supply chain systems is used to design data structures for data collection in supply chain systems. The consistency of these data is ensured by mathematical demonstrations inspired by factory physics theory. The availability, quantity and quality of the data within these data structures define different decision patterns. Ten decision patterns are identified, and validated in the field, to address ten different classes of design and control problems in supply chain systems research.
Abstract:
Knowledge graphs (KGs) and ontologies have been widely adopted for modelling numerous domains. However, understanding the content of an ontology or KG is far from straightforward, and existing methods only partially address this issue. This thesis is based on the assumption that identifying the Ontology Design Patterns (ODPs) in an ontology or a KG contributes to addressing this problem. In most cases, reused ODPs are not explicitly annotated, or their reuse is unintentional. The challenge of automatically identifying ODPs in existing ontologies and KGs is therefore the main focus of this research work. The thesis analyses the role of ODPs in ontology engineering through experiences in actual ontology projects, placing this analysis in the context of existing ontology reuse approaches. Moreover, it introduces a novel method for extracting empirical ODPs (EODPs) from ontologies, and a novel method for extracting EODPs from knowledge graphs, whose schemas are implicit. The first method groups the extracted EODPs into clusters: conceptual components. Each conceptual component represents a modelling problem, e.g. representing collections. As EODPs are fragments possibly extracted from different ontologies, some of them will fall into the same cluster, meaning that they are implemented solutions to the same modelling problem. EODPs and conceptual components enable the empirical observation and comparison of modelling solutions to common modelling problems across ontologies. The second method extracts EODPs from a KG as sets of probabilistic axioms/constraints involving the instantiated ontological entities. These EODPs may support KG inspection and comparison, providing insights into how certain entities are described in a KG. An additional contribution of this thesis is an ontology for annotating ODPs in ontologies and KGs.
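The thesis' extraction methods are not detailed in the abstract; purely to illustrate the flavour of mining recurring schema fragments from a KG with an implicit schema, the sketch below counts which property sets co-occur on the instances of each class using rdflib. The input file, and the idea of treating frequent (class, property-set) pairs as candidate patterns, are assumptions, not the thesis' actual method.

from collections import Counter
from rdflib import Graph, RDF

g = Graph()
g.parse("example_kg.ttl")   # hypothetical input knowledge graph

# For each class, count the distinct property sets its instances use.
patterns = Counter()
for cls in set(g.objects(None, RDF.type)):
    for inst in g.subjects(RDF.type, cls):
        props = frozenset(p for p, _ in g.predicate_objects(inst))
        patterns[(cls, props)] += 1

# Frequent (class, property-set) pairs approximate empirical patterns:
for (cls, props), n in patterns.most_common(5):
    print(n, cls, sorted(props))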
Abstract:
This thesis project studies the agent identity privacy problem in the scalar linear quadratic Gaussian (LQG) control system. To study this problem, privacy models and privacy measures have to be established first. The problem has two distinguishing characteristics: the agent identity is a binary hypothesis (Agent A or Agent B), and the eavesdropper observes a trajectory of correlated data rather than a single observation. I propose privacy models and corresponding privacy measures that take these two characteristics into account. An eavesdropper is assumed to perform hypothesis testing on the agent identity based on the intercepted environment state sequence. The privacy risk is measured by the Kullback-Leibler divergence between the probability distributions of the state sequences under the two hypotheses. Taking into account both the accumulated control reward and the privacy risk, an optimization problem over the policy of Agent B is formulated. The optimal deterministic privacy-preserving LQG policy of Agent B is a linear mapping, and a sufficient condition is given to guarantee that this policy is time-invariant in the asymptotic regime. It is also shown that adding an independent Gaussian random variable cannot improve the performance of Agent B. Based on the privacy model and the LQG control model, I formulate the mathematical problems that address the two design objectives: maximizing the control reward and minimizing the privacy risk. I conduct a theoretical analysis of the LQG control policy in the agent identity privacy problem and of the trade-off between the control reward and the privacy risk. Finally, the theoretical results are justified by numerical experiments, which illustrate the reward-privacy trade-off and yield further observations and insights discussed in the last chapter.
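In symbols, the trade-off described above can be sketched as follows (an illustrative formulation; the thesis' exact objective, weights, and horizon conventions may differ). Writing $x_{0:T}$ for the intercepted state sequence, $P_A$ and $P_B^{\pi}$ for its distributions under the two hypotheses, $r_t$ for the stage reward, and $\lambda > 0$ for a trade-off weight, Agent B solves

\[
\max_{\pi}\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{T} r_t\right] \;-\; \lambda\, D_{\mathrm{KL}}\!\left(P_B^{\pi}(x_{0:T}) \,\big\|\, P_A(x_{0:T})\right),
\]

where a smaller divergence leaves the eavesdropper's hypothesis test closer to a blind guess, at the price of control reward.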
Abstract:
Driving simulators emulate a real vehicle drive in a virtual environment. One of the most challenging problems in this field is to create a simulated drive as real as possible, deceiving the driver's senses into the belief of being in a real vehicle. This thesis first provides an overview of the Stuttgart driving simulator, with a description of the overall system, followed by a theoretical presentation of the commonly used motion cueing algorithms. The second and predominant part of the work presents the implementation of the classical and optimal washout algorithms in a Simulink environment. The project aims to create a new optimal washout algorithm and compare the results obtained with those of the classical washout. The classical washout algorithm, already implemented in the Stuttgart driving simulator, is the most widely used for motion control of the simulator. It is based on a sequence of filters in which each parameter has a clear physical meaning and a unique assignment to a single degree of freedom. However, the effects on human perception are not exploited, and each parameter must be tuned online by an engineer in the control room depending on the driver's feeling. To overcome this problem and also take the driver's sensations into account, the optimal washout motion cueing algorithm was implemented. This optimal-control-based algorithm treats motion cueing as a tracking problem, forcing the accelerations perceived in the simulator to track those that would be perceived in a real vehicle by minimizing the perception error within the constraints of the motion platform. The last chapter presents a comparison between the two algorithms, based on the driver's feelings after the test drive. First, an off-line test with a step signal as input acceleration was run to verify the behaviour of the simulator; then the algorithms were executed in the simulator during test drives on several tracks.
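As a concrete illustration of the filter-based classical washout idea (not the Stuttgart implementation), the sketch below high-pass filters a step in longitudinal acceleration so that the cue onset is reproduced and the platform then washes out to neutral; the sample rate, filter order and cut-off frequency are assumed values.

import numpy as np
from scipy.signal import butter, lfilter

fs = 100.0                                        # sample rate in Hz (assumed)
b, a = butter(2, 0.4, btype="highpass", fs=fs)    # 2nd-order HP, 0.4 Hz cut-off

t = np.arange(0, 10, 1 / fs)
a_vehicle = np.where(t > 1, 2.0, 0.0)   # step in longitudinal acceleration (m/s^2)

# High-pass: the platform reproduces the onset cue, then returns to
# neutral within its limited travel; in a full classical washout scheme
# the sustained part is rendered by the tilt-coordination channel.
a_platform = lfilter(b, a, a_vehicle)

# Double-integrate to platform displacement to check the travel limit.
x_platform = np.cumsum(np.cumsum(a_platform)) / fs**2
print(f"peak platform displacement: {x_platform.max():.2f} m")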
Abstract:
All structures are subjected to various loading conditions and combinations. For offshore structures, these loads include permanent loads, hydrostatic pressure, and wave, current, and wind loads. Typically, the sea environments of different geographical regions are characterized by the 100-year wave height, surface current velocities, and wind speeds. The main problem associated with the commonly used deterministic method is that not all waves have the same period, so the actual stochastic nature of the marine environment is not taken into account. Offshore steel structure fatigue design is carried out using the DNVGL-RP-0005:2016 standard, which supersedes the DNV-RP-C203 standard (2012). Fatigue analysis is necessary for oil and gas producing offshore steel structures, which were first constructed in the Gulf of Mexico (in the 1930s) and later in the North Sea (in the 1960s). Fatigue strength is commonly described by S-N curves, which are obtained from laboratory experiments. The rapid development of the offshore wind industry has driven exploration into deeper ocean areas and the adoption of new support structural concepts such as full lattice tower systems, among others. The optimal design of offshore wind support structures, including foundation, turbine tower, and transition piece components, considering economy, safety, and the environment, is a critical challenge. This study discusses the fatigue design challenges of reusing transition pieces from decommissioned platforms for offshore wind energy. The fatigue resistance of materials and structural components under uniaxial and multiaxial loading is introduced together with the new fatigue design rules, considering combined global and local modeling using finite element analysis software.
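For context, the S-N curves mentioned here take the standard one-slope form log10(N) = log10(a) - m*log10(Δσ), and fatigue life under variable loading is usually assessed with the Palmgren-Miner damage sum D = Σ n_i/N_i ≤ 1. A minimal sketch under those textbook formulas; the curve constants are of the order used for welded steel details and the stress-range histogram is invented, so consult the standard for actual design values.

import math

LOG_A_BAR, M = 12.164, 3.0   # assumed one-slope S-N constants

def cycles_to_failure(stress_range_mpa):
    """N from log10(N) = log10(a) - m*log10(delta_sigma)."""
    return 10 ** (LOG_A_BAR - M * math.log10(stress_range_mpa))

# Hypothetical long-term stress-range histogram: (range in MPa, cycles/year)
histogram = [(100.0, 1e4), (60.0, 1e5), (30.0, 1e6), (10.0, 5e6)]

# Palmgren-Miner rule: failure is predicted when D = sum(n_i / N_i) reaches 1.
damage_per_year = sum(n / cycles_to_failure(ds) for ds, n in histogram)
print(f"annual damage D = {damage_per_year:.3f}, "
      f"fatigue life ≈ {1 / damage_per_year:.1f} years")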
Abstract:
Hybrid bioisostere derivatives combining N-acylhydrazone and furoxan groups were designed with the objective of obtaining at least a dual mechanism of action: cruzain inhibition and nitric oxide (NO) releasing activity. Fifteen designed compounds were synthesized, varying the substitution on both the N-acylhydrazone and the furoxan group. Their anti-Trypanosoma cruzi activity against amastigote forms, NO-releasing potential, and cruzain inhibitory activity were evaluated. The two most active compounds (6 and 14), both against the parasite amastigotes and against the enzyme, contain a nitro group in the para position of the aromatic ring. Permeability screening in Caco-2 cells and a cytotoxicity assay in human cells were performed for these most active compounds, and both proved less cytotoxic than the reference drug, benznidazole. Compound 6 was the most promising since, besides its activity, it showed good permeability and a selectivity index higher than that of the reference drug. Compound 6 was therefore considered a possible candidate for additional studies.
Abstract:
Split-plot design (SPD) and near-infrared chemical imaging were used to study the homogeneity of the drug paracetamol loaded in films prepared from mixtures of the biocompatible polymers hydroxypropyl methylcellulose, polyvinylpyrrolidone, and polyethylene glycol. The study was split into two parts. First, a partial least-squares (PLS) model was developed for a pixel-to-pixel quantification of the drug loaded into the films. Afterwards, an SPD was developed to study the influence of the polymeric composition of the films and of two process conditions related to their preparation (percentage of the drug in the formulation and curing temperature) on the homogeneity of the drug dispersed in the polymeric matrix. Chemical images of each SPD formulation were obtained by pixel-to-pixel prediction of the drug using the PLS model from the first part, and macropixel analyses were performed on each image to obtain the y-responses (the homogeneity parameter). The design was modeled using PLS regression, allowing only the most relevant factors to remain in the final model. The interpretation of the SPD was enhanced by the orthogonal PLS algorithm, in which the y-orthogonal variation in the design is separated from the y-correlated variation.
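A minimal sketch of the pixel-to-pixel prediction and macropixel steps described above, using scikit-learn's PLSRegression; the array shapes, component count and block size are assumptions, and random numbers stand in for real calibration spectra and image cubes.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical calibration set: reference spectra and known drug contents
X_cal = np.random.rand(30, 200)   # 30 spectra x 200 wavelengths
y_cal = np.random.rand(30)        # drug content (% w/w)

pls = PLSRegression(n_components=5)
pls.fit(X_cal, y_cal)

# Hyperspectral image cube: height x width x wavelengths
cube = np.random.rand(64, 64, 200)
h, w, n_wl = cube.shape

# Unfold to (pixels x wavelengths), predict, refold to a concentration map
drug_map = pls.predict(cube.reshape(-1, n_wl)).reshape(h, w)

# Macropixel analysis: split the map into 8x8 blocks and use the spread
# of the block means as a homogeneity parameter (one possible y-response).
block_means = drug_map.reshape(8, 8, 8, 8).mean(axis=(1, 3))
print("homogeneity (std of macropixel means):", block_means.std())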