887 results for Set design
Abstract:
Three-dimensional printing (“3DP”) is an additive manufacturing technology that starts with a virtual 3D model of the object to be printed, the so-called Computer-Aided-Design (“CAD”) file. This file, when sent to the printer, gives instructions to the device on how to build the object layer-by-layer. This paper explores whether design protection is available under the current European regulatory framework for designs that are computer-created by means of CAD software, and, if so, under what circumstances. The key point is whether the appearance of a product, embedded in a CAD file, could be regarded as a protectable element under existing legislation. To this end, it begins with an inquiry into the concepts of “design” and “product”, set forth in Article 3 of the Community Design Regulation No. 6/2002 (“CDR”). Then, it considers the EUIPO’s practice of accepting 3D digital representations of designs. The inquiry goes on to illustrate the implications of making a CAD file available online. It suggests that the act of uploading a CAD file onto a 3D printing platform may be tantamount to a disclosure for the purposes of triggering unregistered design protection, and for appraising the state of the prior art. It also argues that, when assessing the individual character requirement, the notions of “informed user” and “the designer’s degree of freedom” may need to be reconsidered in the future. The following part touches on the exceptions to design protection, with a special focus on the repairs clause set forth in Article 110 CDR. The concluding part explores different measures that may be implemented to prohibit the unauthorised creation and sharing of CAD files embedding design-protected products.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
2016 was a breakout year for the virtual reality industry, a field in which 3D surveying plays an important role and has been receiving increasing attention. This project aims to build and optimize a WebGL three-dimensional broadcast platform combined with streaming media technology, taking a streaming media server and panoramic video playback in the browser as its application background. It also discusses the architecture from the streaming media server to the panoramic media player and analyzes the relevant theoretical problems. The paper focuses on the debugging of the streaming media platform, the construction of the WebGL player environment, the analysis of different types of sphere models, and 3D mapping technology. The main work consists of the following points. First, a streaming service platform was built on the Easy Darwin open-source streaming media server; it receives an RTSP stream, passes it to the streaming media server and forwards HLS-sliced video to clients. Second, a WebGL panoramic video player was written on top of the Three.js library, with jQuery-based playback controls in the browser, resulting in an HTML5 panoramic video player. Third, the latitude-longitude sphere model provided by the Three.js library was analyzed with respect to the WebGL rendering method, and its drawbacks and the most promising points for improvement were identified. Fourth, based on the Schneider transform principle, a Schneider sphere projection model was established and the resulting OBJ file was converted into a JS file for the media player to read, finally achieving high-precision, plugin-free real-time panoramic video playback. Last, the whole project is summarized and directions for future optimization and market extension are put forward.
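The latitude-longitude sphere model mentioned above is essentially a mesh whose vertices are placed by spherical coordinates and whose texture coordinates sample an equirectangular video frame. As a rough, hedged illustration of that mapping (not the project's actual Three.js or Schneider-projection code), the following Python sketch generates the vertex positions and UV coordinates such a player would feed to WebGL; the segment counts and radius are arbitrary assumptions.

```python
import math

def latlon_sphere(radius=1.0, lat_segments=32, lon_segments=64):
    """Build vertices and equirectangular UVs for a latitude-longitude sphere.

    Each video frame is assumed to be an equirectangular panorama, so the UV
    of a vertex is simply its longitude/latitude normalised to [0, 1].
    """
    vertices, uvs = [], []
    for i in range(lat_segments + 1):
        theta = math.pi * i / lat_segments          # polar angle, 0..pi
        for j in range(lon_segments + 1):
            phi = 2.0 * math.pi * j / lon_segments  # azimuth, 0..2*pi
            x = radius * math.sin(theta) * math.cos(phi)
            y = radius * math.cos(theta)
            z = radius * math.sin(theta) * math.sin(phi)
            vertices.append((x, y, z))
            # u follows longitude, v follows latitude (north pole at v = 0)
            uvs.append((j / lon_segments, i / lat_segments))
    return vertices, uvs

verts, uvs = latlon_sphere()
print(len(verts), "vertices,", len(uvs), "texture coordinates")
```

A well-known limitation of such a uniform tessellation is that it oversamples and distorts the texture near the poles; whether that is the specific drawback addressed by the Schneider sphere projection model is not stated in the abstract.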
Abstract:
In the current Cambodian higher education sector, there is little regulation of standards in curriculum design of undergraduate degrees in English language teacher education. The researcher, in the course of his professional work in the Curriculum and Policy Office at the Department of Higher Education, has seen evidence that most universities tend to copy their curriculum from one source, the curriculum of the Institute of Foreign Languages, the Royal University of Phnom Penh. Their programs fail to impose any entry standards, accepting students who pass the high school exam without any entrance examination. It is possible for a student to enter university with satisfactory scores in all subjects but English. Therefore, not many graduates are able to fulfil the professional requirements of the roles they are supposed to take. Neau (2010) claims that many Cambodian EFL teachers do not reach a high performance standard due to their low English language proficiency and poor background in teacher education. The main purpose of this study is to establish key guidelines for developing curricula for English language teacher education for all the universities across the country. It examines the content of the Bachelor's degree of Education in Teaching English as a Foreign Language (B Ed in TEFL) and Bachelor's degree of Arts in Teaching English to Speakers of Other Languages (BA in TESOL) curricula adopted in Cambodian universities on the basis of criteria proposed in current curriculum research. It also investigates the perspectives of Cambodian EFL teachers on the areas of knowledge and skill they need in order to perform their English teaching duties in Cambodia today. The areas of knowledge and skill offered in the current curricula at Cambodian higher education institutions (HEIs), the framework of the knowledge base for EFL teacher education and general higher education, and the areas of knowledge and skill Cambodian EFL teachers perceive to be important, are compared so as to identify any gaps in the current English language teacher education curricula in the Cambodian HEIs. The existence of gaps shows what domains of knowledge and skill need to be included in the English language teacher education curricula at Cambodian HEIs. These domains are those identified by previous curriculum researchers in both general and English language teacher education at tertiary level. Therefore, the present study provides useful insights into the importance of including appropriate content in English language teacher education curricula. Mixed methods are employed in this study. The course syllabi and the descriptions within the curricula in five Cambodian HEIs are analysed qualitatively based on the framework of knowledge and skills for EFL teachers, which is formed by looking at the knowledge base for second language teachers suggested by the methodologists and curriculum specialists whose work is elaborated on in the review of literature. A quantitative method is applied to analyse the perspectives of 120 Cambodian EFL teachers on areas of knowledge and skills they should possess. The fieldwork was conducted between June and August 2014. The analysis reveals that the following areas are included in the curricula at the five universities: communication skills, general knowledge, knowledge of teaching theories, teaching skills, pedagogical reasoning and decision making skills, subject matter knowledge, contextual knowledge, cognitive abilities, and knowledge of social issues.
Additionally, research skills are included in three curricula, while society and community involvement appears in only one. Further, information and communication technology, which is outlined in the Education Strategies Plan (2006-2010), forms part of four curricula, while leadership skills form part of two. This study ultimately demonstrates that most domains that are directly or indirectly related to language teaching competence are not sufficiently represented in the current curricula. On the basis of its findings, the study concludes with a set of guidelines that should inform the design and development of TESOL and TEFL curricula in Cambodia.
Abstract:
This research project sets out to understand the contemporary visual culture of Portuguese postage stamp design from 2001 to 2013, taking into account technological advances and new digital media. It starts from the history of the postage stamp as an object of historical, cultural and visual testimony, rich in enduring values and possessing a distinctive graphic language. The set of characteristics that defines the postage stamp carries an importance tied to a country's history and culture, expressing a language characteristic of the social life and period in which it was produced, values that over the years have retained significance and sentiment for humanity. First, an exhaustive survey of the main Portuguese designers and studios was carried out, both in books on the CTT collections and through internet search engines. Next, a general analysis of the syntax of visual language was conducted, based on the work of Donis A. Dondis (2003), applied to visual communication in design and, in this case, to postage stamps. Building on that analysis, a postage stamp issue was then created in combination with augmented reality technology, based on a theme centred on the city of Porto that can be extended to other cities in the country, “Uma visita portuguesa com certeza”. With this project it was possible to demonstrate and implement a digital technology on a physical artefact usually presented on paper. The study thus aims to contribute to innovation in the philatelic and historical design of the Portuguese postage stamp through the use of this technology.
Abstract:
This thesis introduces the L1 Adaptive Control Toolbox, a set of tools implemented in Matlab that aid in the design process of an L1 adaptive controller and enable the user to construct simulations of the closed-loop system to verify its performance. Following a brief review of the existing theory on L1 adaptive controllers, the interface of the toolbox is presented, including a description of the functions accessible to the user. Two novel algorithms for determining the required sampling period of a piecewise constant adaptive law are presented and their implementation in the toolbox is discussed. A detailed description of the structure of the toolbox is provided, together with a discussion of how the creation of simulations is implemented. Finally, the graphical user interface is presented and described in detail, including the graphical design tools provided for the development of the filter C(s). The thesis closes with suggestions for further improvement of the toolbox.
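As a purely illustrative sketch of the kind of closed loop the toolbox builds (not the toolbox's own Matlab API, whose function names are not given in the abstract), the following Python script simulates a scalar L1 adaptive controller with a piecewise-constant adaptive law updated every Ts seconds and a first-order low-pass filter C(s) = omega/(s + omega); the plant, filter bandwidth, sampling period and disturbance are all assumptions made for the example.

```python
import math

# Scalar plant: x_dot = a_m*x + b*(u + sigma(t)), with unknown disturbance sigma.
a_m, b = -1.0, 1.0        # desired (Hurwitz) dynamics and input gain (assumed)
omega = 20.0              # bandwidth of the low-pass filter C(s) = omega/(s + omega)
Ts = 0.005                # sampling period of the piecewise-constant adaptive law
dt = 0.0005               # integration step
T_end = 10.0
k_g = -a_m / b            # feedforward gain so the DC gain from r to x is 1

sigma = lambda t: 2.0 * math.sin(0.5 * t)   # unknown disturbance (assumed)
r = lambda t: 1.0                           # reference

x = x_hat = eta = sigma_hat = 0.0
phi = (math.exp(a_m * Ts) - 1.0) / a_m      # Phi(Ts) = A_m^{-1}(e^{A_m Ts} - 1)

t, next_update = 0.0, 0.0
while t < T_end:
    if t >= next_update:                    # piecewise-constant adaptation
        x_tilde = x_hat - x
        sigma_hat = -math.exp(a_m * Ts) * x_tilde / (phi * b)
        next_update += Ts
    u = k_g * r(t) - eta                    # control law u = k_g*r - C(s)*sigma_hat
    # Euler integration of plant, state predictor and filter state eta
    x += dt * (a_m * x + b * (u + sigma(t)))
    x_hat += dt * (a_m * x_hat + b * (u + sigma_hat))
    eta += dt * (-omega * eta + omega * sigma_hat)
    t += dt

print(f"final state x = {x:.3f}, reference r = {r(T_end):.3f}")
```

Shortening Ts improves disturbance rejection at the cost of faster hardware requirements; choosing Ts to meet a performance bound is precisely the kind of question the two sampling-period algorithms in the thesis address.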
Abstract:
Robotics is an emergent branch of engineering that involves the conception, manufacture, and control of robots. It is a multidisciplinary field that combines electronics, design, computer science, artificial intelligence, mechanics and nanotechnology. Its evolution results in machines that are able to perform tasks with some level of complexity. Multi-agent systems are a research topic within robotics, as they allow higher-complexity problems to be solved through the execution of simple routines. Robotic soccer supports the study and development of robotics and multi-agent systems, as the agents have to work together as a team while dealing with many of the problems found in everyday life, such as adaptation to a highly dynamic environment like that of a soccer game. CAMBADA is the robotic soccer team of the IRIS research group at IEETA, composed of teachers, researchers and students of the University of Aveiro, whose main objective each year is participation in the RoboCup Middle Size League. The purpose of this work is to improve coordination in set-piece situations. This thesis introduces a new behavior, adapts the existing ones for offensive situations, and proposes a new positioning method for defensive situations. The developed work was incorporated into the competition software of the robots, which allows the experimental results obtained, both in simulation and with the physical robots in the laboratory, to be presented in this dissertation.
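The abstract does not detail the proposed positioning method, but a defensive set-piece placement of the kind used in Middle Size League soccer can be illustrated with a simple geometric rule: place the available defenders on the line between the ball and the centre of the own goal, while respecting the minimum distance from the ball imposed by the rules. The Python sketch below is a hypothetical illustration of such a rule, not CAMBADA's actual algorithm; the field coordinates and the 3 m rule distance are assumptions.

```python
import math

GOAL_CENTER = (0.0, -9.0)   # own goal centre in field coordinates (assumed)
MIN_BALL_DIST = 3.0         # assumed minimum distance to the ball during opponent set pieces

def defensive_positions(ball, n_defenders, spacing=0.8):
    """Place defenders on the ball-to-goal line, just outside the minimum ball distance.

    The first defender stands just beyond the restricted circle on the line towards
    the goal; the others line up behind it at a fixed spacing.
    """
    bx, by = ball
    gx, gy = GOAL_CENTER
    dx, dy = gx - bx, gy - by
    dist = math.hypot(dx, dy) or 1e-9
    ux, uy = dx / dist, dy / dist          # unit vector from ball towards own goal
    positions = []
    for i in range(n_defenders):
        d = MIN_BALL_DIST + 0.2 + i * spacing
        d = min(d, dist - 0.5)             # never place a robot inside the goal
        positions.append((bx + ux * d, by + uy * d))
    return positions

for p in defensive_positions(ball=(2.0, 0.0), n_defenders=3):
    print(f"({p[0]:.2f}, {p[1]:.2f})")
```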
Abstract:
This thesis presents approximation algorithms for some NP-Hard combinatorial optimization problems on graphs and networks; in particular, we study problems related to Network Design. Under the widely-believed complexity-theoretic assumption that P is not equal to NP, there are no efficient (i.e., polynomial-time) algorithms that solve these problems exactly. Hence, if one desires efficient algorithms for such problems, it is necessary to consider approximate solutions: An approximation algorithm for an NP-Hard problem is a polynomial time algorithm which, for any instance of the problem, finds a solution whose value is guaranteed to be within a multiplicative factor of the value of an optimal solution to that instance. We attempt to design algorithms for which this factor, referred to as the approximation ratio of the algorithm, is as small as possible. The field of Network Design comprises a large class of problems that deal with constructing networks of low cost and/or high capacity, routing data through existing networks, and many related issues. In this thesis, we focus chiefly on designing fault-tolerant networks. Two vertices u,v in a network are said to be k-edge-connected if deleting any set of k − 1 edges leaves u and v connected; similarly, they are k-vertex connected if deleting any set of k − 1 other vertices or edges leaves u and v connected. We focus on building networks that are highly connected, meaning that even if a small number of edges and nodes fail, the remaining nodes will still be able to communicate. A brief description of some of our results is given below. We study the problem of building 2-vertex-connected networks that are large and have low cost. Given an n-node graph with costs on its edges and any integer k, we give an O(log n log k) approximation for the problem of finding a minimum-cost 2-vertex-connected subgraph containing at least k nodes. We also give an algorithm of similar approximation ratio for maximizing the number of nodes in a 2-vertex-connected subgraph subject to a budget constraint on the total cost of its edges. Our algorithms are based on a pruning process that, given a 2-vertex-connected graph, finds a 2-vertex-connected subgraph of any desired size and of density comparable to the input graph, where the density of a graph is the ratio of its cost to the number of vertices it contains. This pruning algorithm is simple and efficient, and is likely to find additional applications. Recent breakthroughs on vertex-connectivity have made use of algorithms for element-connectivity problems. We develop an algorithm that, given a graph with some vertices marked as terminals, significantly simplifies the graph while preserving the pairwise element-connectivity of all terminals; in fact, the resulting graph is bipartite. We believe that our simplification/reduction algorithm will be a useful tool in many settings. We illustrate its applicability by giving algorithms to find many trees that each span a given terminal set, while being disjoint on edges and non-terminal vertices; such problems have applications in VLSI design and other areas. We also use this reduction algorithm to analyze simple algorithms for single-sink network design problems with high vertex-connectivity requirements; we give an O(k log n)-approximation for the problem of k-connecting a given set of terminals to a common sink. 
We study similar problems in which different types of links, of varying capacities and costs, can be used to connect nodes; assuming there are economies of scale, we give algorithms to construct low-cost networks with sufficient capacity or bandwidth to simultaneously support flow from each terminal to the common sink along many vertex-disjoint paths. We further investigate capacitated network design, where edges may have arbitrary costs and capacities. Given a connectivity requirement R_uv for each pair of vertices u,v, the goal is to find a low-cost network which, for each uv, can support a flow of R_uv units of traffic between u and v. We study several special cases of this problem, giving both algorithmic and hardness results. In addition to Network Design, we consider certain Traveling Salesperson-like problems, where the goal is to find short walks that visit many distinct vertices. We give a (2 + epsilon)-approximation for Orienteering in undirected graphs, achieving the best known approximation ratio, and the first approximation algorithm for Orienteering in directed graphs. We also give improved algorithms for Orienteering with time windows, in which vertices must be visited between specified release times and deadlines, and other related problems. These problems are motivated by applications in the fields of vehicle routing, delivery and transportation of goods, and robot path planning.
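For reference, the two quantities this abstract defines in words can be stated formally: for a minimization problem, the approximation ratio of an algorithm ALG is the smallest alpha for which the first guarantee below holds on every instance, and the density used by the pruning step is the cost of a subgraph divided by its number of vertices.

```latex
% Approximation guarantee for a minimization problem, and the density of a subgraph G
% with edge costs c (cost of its edges over its number of vertices).
\[
  \mathrm{ALG}(I) \;\le\; \alpha \cdot \mathrm{OPT}(I) \quad \text{for every instance } I,
  \qquad
  \mathrm{density}(G) \;=\; \frac{c(E(G))}{|V(G)|}.
\]
```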
Abstract:
Due to trends in aero-design, aeroelasticity is becoming increasingly important in modern turbomachines. Design requirements of turbomachines lead to the development of high aspect ratio blades and blade integral disc designs (blisks), which are especially prone to complex modes of vibration. Therefore, experimental investigations yielding high-quality data are required for improving the understanding of aeroelastic effects in turbomachines. One possibility to achieve high-quality data is to excite and measure blade vibrations in turbomachines. The major requirement for blade excitation and blade vibration measurements is to minimize interference with the aeroelastic effects to be investigated. Thus, in this paper, a non-contact, and thus low-interference, experimental set-up for exciting and measuring blade vibrations is proposed and shown to work. A novel acoustic system excites rotor blade vibrations, which are measured with an optical tip-timing system. By performing measurements in an axial compressor, the potential of the acoustic excitation method for investigating aeroelastic effects is explored. The basic principle of this method is described and proven through the analysis of blade responses at different acoustic excitation frequencies and at different rotational speeds. To verify the accuracy of the tip-timing system, amplitudes measured by tip-timing are compared with strain gage measurements. They are found to agree well. Two approaches to vary the nodal diameter (ND) of the excited vibration mode by controlling the acoustic excitation are presented. By combining the different excitable acoustic modes with a phase-lag control, each ND of the investigated 30-blade rotor can be excited individually. This feature of the present acoustic excitation system is of great benefit to aeroelastic investigations and represents one of the main advantages over other excitation methods proposed in the past. In future studies, the acoustic excitation method will be used to investigate aeroelastic effects in high-speed turbomachines in detail. The results of these investigations are to be used to improve the aeroelastic design of modern turbomachines.
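The phase-lag control mentioned above can be illustrated with a small calculation: to excite a travelling pattern with a chosen nodal diameter, each acoustic source around the annulus is driven with a circumferential phase offset proportional to the target ND. The Python sketch below is a generic illustration of that principle under assumed numbers (8 equally spaced speakers and a freely chosen excitation frequency); it is not the authors' control implementation.

```python
import math

def speaker_phases(target_nd, n_speakers):
    """Phase (radians) for each equally spaced acoustic source so that the
    superposed field forms a travelling pattern with `target_nd` nodal diameters."""
    return [2.0 * math.pi * target_nd * k / n_speakers for k in range(n_speakers)]

def drive_signal(phase, freq_hz, t):
    """Excitation signal of one speaker at time t (unit amplitude)."""
    return math.sin(2.0 * math.pi * freq_hz * t + phase)

n_speakers, target_nd, freq_hz = 8, 2, 850.0   # assumed set-up
phases = speaker_phases(target_nd, n_speakers)
for k, ph in enumerate(phases):
    print(f"speaker {k}: phase = {math.degrees(ph) % 360:7.1f} deg, "
          f"signal at t=0: {drive_signal(ph, freq_hz, 0.0):+.3f}")
```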
Abstract:
Interest in renewable energy has increased considerably in recent years due to the concerns raised over the environmental impact of conventional energy sources and their price volatility. In particular, wind power has enjoyed a dramatic global growth in installed capacity over the past few decades. Nowadays, the advancement of the wind turbine industry represents a challenge for several engineering areas, including materials science, computer science, aerodynamics, analytical design and analysis methods, testing and monitoring, and power electronics. In particular, the technological improvement of wind turbines is currently tied to the use of advanced design methodologies, allowing the designers to develop new and more efficient design concepts. Integrating mathematical optimization techniques into the multidisciplinary design of wind turbines constitutes a promising way to enhance the profitability of these devices. In the literature, wind turbine design optimization is typically performed deterministically. Deterministic optimizations do not consider any degree of randomness affecting the inputs of the system under consideration, and result, therefore, in a unique set of outputs. However, given the stochastic nature of the wind and the uncertainties associated, for instance, with wind turbine operating conditions or geometric tolerances, deterministically optimized designs may be inefficient. Therefore, one of the ways to further improve the design of modern wind turbines is to take into account the aforementioned sources of uncertainty in the optimization process, achieving robust configurations with minimal performance sensitivity to factors causing variability. The research work presented in this thesis deals with the development of a novel integrated multidisciplinary design framework for the robust aeroservoelastic design optimization of multi-megawatt horizontal axis wind turbine (HAWT) rotors, accounting for the stochastic variability related to the input variables. The design system is based on a multidisciplinary analysis module integrating several simulation tools needed to characterize the aeroservoelastic behavior of wind turbines and determine their economic performance by means of the levelized cost of energy (LCOE). The reported design framework is portable and modular in that any of its analysis modules can be replaced with counterparts of user-selected fidelity. The presented technology is applied to the design of a 5-MW HAWT rotor to be used at sites of wind power density class from 3 to 7, where the mean wind speed at 50 m above the ground ranges from 6.4 to 11.9 m/s. Assuming the mean wind speed to vary stochastically in such range, the rotor design is optimized by minimizing the mean and standard deviation of the LCOE. Airfoil shapes, spanwise distributions of blade chord and twist, internal structural layup and rotor speed are optimized concurrently, subject to an extensive set of structural and aeroelastic constraints. The effectiveness of the multidisciplinary and robust design framework is demonstrated by showing that the probabilistically designed turbine achieves more favorable probabilistic performance than that of the initial baseline turbine and of a turbine designed deterministically.
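The robust objective described above (minimizing both the mean and the standard deviation of the LCOE under a stochastically varying mean wind speed) can be sketched in a few lines. The snippet below is a toy Monte Carlo illustration with a made-up lcoe() surrogate, a made-up design vector, and an assumed uniform distribution over the 6.4-11.9 m/s range; it only shows how a deterministic cost function is turned into a robust one, not the thesis's multidisciplinary analysis chain.

```python
import random

def lcoe(design, mean_wind_speed):
    """Hypothetical surrogate for the levelized cost of energy of a rotor design.

    In the actual framework this would call the aeroservoelastic analysis module;
    here it is a stand-in so the robust objective can be demonstrated.
    """
    chord_scale, tip_speed = design
    energy = mean_wind_speed ** 3 * tip_speed * (1.0 - 0.1 * abs(chord_scale - 1.0))
    cost = 50.0 + 20.0 * chord_scale + 0.5 * tip_speed ** 2
    return cost / energy

def robust_objective(design, n_samples=2000, weight_std=1.0, seed=0):
    """Mean plus weighted standard deviation of LCOE over the assumed wind-speed range."""
    rng = random.Random(seed)
    samples = [lcoe(design, rng.uniform(6.4, 11.9)) for _ in range(n_samples)]
    mean = sum(samples) / n_samples
    var = sum((s - mean) ** 2 for s in samples) / (n_samples - 1)
    return mean + weight_std * var ** 0.5

print(robust_objective((1.0, 8.0)))   # evaluate one candidate design
```

An optimizer would then minimize robust_objective over the design variables, which is the probabilistic counterpart of minimizing lcoe() at a single nominal wind speed.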
Abstract:
Chapter 1: Under the average common value function, we select almost uniquely the mechanism that gives the seller the largest portion of the true value in the worst situation among all the direct mechanisms that are feasible, ex-post implementable and individually rational. Chapter 2: Strategy-proof, budget balanced, anonymous, envy-free linear mechanisms assign p identical objects to n agents. The efficiency loss is the largest ratio of surplus loss to efficient surplus, over all profiles of non-negative valuations. The smallest efficiency loss is uniquely achieved by the following simple allocation rule: assign one object to each of the p−1 agents with the highest valuations, a large probability to the agent with the pth highest valuation, and the remaining probability to the agent with the (p+1)th highest valuation. When “envy freeness” is replaced by the weaker condition “voluntary participation”, the optimal mechanism differs only when p is much less than n. Chapter 3: One group is to be selected among a set of agents. Agents have preferences over the size of the group if they are selected, and preferences over size as well as over the “stand-outside” option are single-peaked. We take a mechanism design approach and search for group selection mechanisms that are efficient, strategy-proof and individually rational. Two classes of such mechanisms are presented. The proposing mechanism allows agents to either maintain or shrink the group size following a fixed priority, and is characterized by group strategy-proofness. The voting mechanism enlarges the group size in each voting round, and achieves at least half of the maximum group size compatible with individual rationality.
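The allocation rule from Chapter 2 is concrete enough to write down. The Python sketch below implements it for p identical objects and n agents: the p−1 highest-valuation agents each receive an object with certainty, the pth-highest receives one with some large probability q, and the (p+1)th-highest with the remaining probability 1−q. The abstract does not give the optimal value of q, so it is left as a caller-supplied parameter here.

```python
def allocation_probabilities(valuations, p, q):
    """Probability of receiving an object for each agent under the Chapter 2 rule:
    top p-1 agents get one for sure, the p-th gets one with probability q,
    the (p+1)-th with probability 1-q, and everyone else gets nothing.

    `q` is the 'large probability' of the abstract; its optimal value is not
    specified here.
    """
    n = len(valuations)
    assert 1 <= p < n, "need at least p+1 agents for the randomised step"
    order = sorted(range(n), key=lambda i: valuations[i], reverse=True)
    probs = [0.0] * n
    for rank, agent in enumerate(order):
        if rank < p - 1:
            probs[agent] = 1.0          # one object with certainty
        elif rank == p - 1:
            probs[agent] = q            # p-th highest valuation
        elif rank == p:
            probs[agent] = 1.0 - q      # (p+1)-th highest valuation
    return probs

# Example: 3 objects, 5 agents; expected number of objects assigned equals p.
print(allocation_probabilities([9.0, 7.5, 6.0, 4.0, 2.0], p=3, q=0.8))
```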
Abstract:
The main aim of the research project "On the Contribution of Schools to Children's Overall Indoor Air Exposure" is to study associations between adverse health effects, namely, allergy, asthma, and respiratory symptoms, and indoor air pollutants to which children are exposed in primary schools and homes. Specifically, this investigation reports on the design of the study and methods used for data collection within the research project and discusses factors that need to be considered when designing such a study. Further, preliminary findings concerning descriptors of selected characteristics in schools and homes, the study population, and clinical examination are presented. The research project was designed in two phases. In the first phase, 20 public primary schools were selected and a detailed inspection and indoor air quality (IAQ) measurements including volatile organic compounds (VOC), aldehydes, particulate matter (PM2.5, PM10), carbon dioxide (CO2), carbon monoxide (CO), bacteria, fungi, temperature, and relative humidity were conducted. A questionnaire survey of 1600 children of ages 8-9 years was undertaken and a lung function test, exhaled nitric oxide (eNO), and tear film stability testing were performed. The questionnaire focused on children's health and on the environment in their school and homes. One thousand and ninety-nine questionnaires were returned. In the second phase, a subsample of 68 children was enrolled for further studies, including a walk-through inspection and checklist and an extensive set of IAQ measurements in their homes. The acquired data are relevant to assess children's environmental exposures and health status.
Abstract:
The following thesis navigates the primary artistic concept, design process and execution of Marchlena Rodgers’ costume design for the University of Maryland’s production of Intimate Apparel. Intimate Apparel opened October 9, 2015 in the University of Maryland’s Kay Theatre. The piece was written by Lynn Nottage and directed by Jennifer Nelson. The set was designed by Lydia Francis and the lighting by Max Doolittle.
Abstract:
The following thesis documents the design process and execution of Tyler Gunther’s costume design for the Theatre, Dance and Performance Studies’ production of Tartuffe. The production opened November 6, 2015 in the University of Maryland’s Kogod Theater. It was directed by Lee Mikeska Gardner, with the set designed by Halea Coulter and the lighting by Connor Dreibelbis.
Abstract:
The aim of this thesis is to review and augment the theory and methods of optimal experimental design. In Chapter 1 the scene is set by considering the possible aims of an experimenter prior to an experiment, the statistical methods one might use to achieve those aims and how experimental design might aid this procedure. It is indicated that, given a criterion for design, a priori optimal design will only be possible in certain instances and, otherwise, some form of sequential procedure would seem to be required. In Chapter 2 an exact experimental design problem is formulated mathematically and is compared with its continuous analogue. Motivation is provided for the solution of this continuous problem, and the remainder of the chapter concerns this problem. A necessary and sufficient condition for optimality of a design measure is given. Problems which might arise in testing this condition are discussed, in particular with respect to possible non-differentiability of the criterion function at the design being tested. Several examples are given of optimal designs which may be found analytically and which illustrate the points discussed earlier in the chapter. In Chapter 3 numerical methods of solution of the continuous optimal design problem are reviewed. A new algorithm is presented with illustrations of how it should be used in practice. It is shown that, for reasonably large sample size, continuously optimal designs may be well approximated by an exact design. In situations where this is not satisfactory, algorithms for improvement of this design are reviewed. Chapter 4 consists of a discussion of sequentially designed experiments, with regard to both the philosophies underlying, and the application of the methods of, statistical inference. In Chapter 5 we constructively criticise previous suggestions for fully sequential design procedures. Alternative suggestions are made along with conjectures as to how these might improve performance. Chapter 6 presents a simulation study, the aim of which is to investigate the conjectures of Chapter 5. The results of this study provide empirical support for these conjectures. In Chapter 7 examples are analysed. These suggest aids to sequential experimentation by means of reduction of the dimension of the design space and the possibility of experimenting semi-sequentially. Further examples are considered which stress the importance of the use of prior information in situations of this type. Finally we consider the design of experiments when semi-sequential experimentation is mandatory because of the necessity of taking batches of observations at the same time. In Chapter 8 we look at some of the assumptions which have been made and indicate what may go wrong where these assumptions no longer hold.
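The continuous optimal design problem and the optimality condition for a design measure discussed in Chapters 2 and 3 are usually checked and exploited numerically; the classical vertex-direction (Wynn-Fedorov) algorithm for D-optimality is one such method. The Python sketch below applies it to a quadratic regression model on a small candidate grid. It is offered only as background illustration of this family of algorithms, not as the new algorithm presented in the thesis; it requires NumPy.

```python
import numpy as np

def d_optimal_design(candidates, model, n_iter=500):
    """Wynn-Fedorov vertex-direction algorithm for an approximate D-optimal design.

    candidates : iterable of candidate design points x
    model      : function mapping x to the regression vector f(x)
    Returns the design weights on the candidate points.
    """
    F = np.array([model(x) for x in candidates])              # rows are f(x)^T
    n, p = F.shape
    w = np.full(n, 1.0 / n)                                   # start from the uniform design
    for k in range(n_iter):
        M = F.T @ (w[:, None] * F)                            # information matrix M(xi)
        d = np.einsum('ij,jk,ik->i', F, np.linalg.inv(M), F)  # variance function d(x, xi)
        j = int(np.argmax(d))
        if d[j] <= p + 1e-6:                                  # equivalence-theorem stopping rule
            break
        alpha = 1.0 / (k + 2)                                 # standard step length
        w = (1 - alpha) * w
        w[j] += alpha
    return w

# Quadratic regression E[y] = b0 + b1*x + b2*x^2 on [-1, 1]
xs = np.linspace(-1.0, 1.0, 21)
w = d_optimal_design(xs, lambda x: np.array([1.0, x, x * x]))
for x, wi in zip(xs, w):
    if wi > 0.01:
        print(f"x = {x:+.2f}  weight = {wi:.3f}")
```

For this model the weights concentrate near the well-known three-point support {-1, 0, 1}, and the stopping rule is exactly the necessary and sufficient condition (the variance function not exceeding the number of parameters) that the abstract refers to.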