884 results for High-rise apartment buildings - Design and construction - China
Abstract:
This paper carries the rather weighty title of "Evolution of Design Practice at the Iowa State Highway Commission for the Determination of Peak Discharges at Bridges and Culverts." Hopefully, this evolving process will lead to a more precise definition of a peak rate of runoff for a selected recurrence interval at a particular site. In this paper the author will relate where the Highway Commission has been, is now, and will be going in this art of hydrology. He will then offer some examples at a few sites in Iowa to illustrate the use of the various methods. Finally, he will look ahead to some of the pitfalls still lying in wait for us.
Abstract:
Thesis (Master's)--University of Washington, 2016-08
Abstract:
The SD card (Secure Digital Memory Card) is a widely used portable storage medium. Current research on SD cards focuses mainly on SD card controllers based on FPGAs (Field Programmable Gate Arrays); most of these designs rely on an API (Application Programming Interface), the AHB bus (Advanced High-performance Bus) and similar interfaces, and are dedicated to achieving ultra-high-speed communication between the SD card and the host system. SD card controllers play a vital role in high-speed cameras and other specialized fields. The FPGA-based file system and SD2.0 IP (Intellectual Property core) presented here not only delivers a good transmission rate but also provides systematic management of files, while retaining strong portability and practicality. The design and implementation of the file system on an SD card covers three main innovation points. First, the combination and integration of the file system and the SD card controller makes the overall system highly integrated and practical: the popular SD2.0 protocol is implemented for the communication channel, and a pure digital logic design in VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) integrates the SD card controller at the hardware layer with the FAT32 file system for the entire system. Second, the file management mechanism makes file processing more convenient, especially for batches of small files, easing the pressure on the host system to access and process them frequently and thereby enhancing overall system efficiency. Third, the digital design ensures good performance: for transmission security, a CRC (Cyclic Redundancy Check) algorithm protects data transfers; each module is designed independently of platform-specific macro cells, which preserves portability; and custom instructions and interfaces make the IP easy to use. Finally, the design was tested on multiple platforms, namely Xilinx and Altera FPGA development boards, with timing simulation and debugging of each module. Test results show that the FPGA-based file system IP supports SD, TF and Micro SD cards under the 2.0 protocol, operates in SD bus mode, and successfully implements systematic management of stored files. Read and write rates on a Kingston Class 10 card are approximately 24.27 MB/s and 16.94 MB/s, respectively.
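The CRC protection mentioned above is realized in VHDL inside the IP core; the abstract does not give the implementation, but the SD 2.0 command channel uses the standard CRC-7 with polynomial x^7 + x^3 + 1. The following is a minimal Python reference sketch of that checksum and of a 48-bit command frame, included as a behavioural illustration rather than the thesis's hardware module.

```python
def crc7(payload: bytes) -> int:
    """CRC-7 as used on the SD command line: polynomial x^7 + x^3 + 1,
    initial value 0, bits processed MSB first."""
    crc = 0
    for byte in payload:
        for i in range(7, -1, -1):
            feedback = ((byte >> i) & 1) ^ ((crc >> 6) & 1)
            crc = (crc << 1) & 0x7F
            if feedback:
                crc ^= 0x09
    return crc

def sd_command_frame(index: int, argument: int) -> bytes:
    """48-bit SD command frame: start/transmission bits, 6-bit command index,
    32-bit argument, CRC-7, end bit."""
    head = bytes([0x40 | (index & 0x3F)]) + argument.to_bytes(4, "big")
    return head + bytes([(crc7(head) << 1) | 0x01])

# CMD0 (GO_IDLE_STATE) with a zero argument -> the familiar 40 00 00 00 00 95
print(sd_command_frame(0, 0).hex())
```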
Abstract:
In the current Cambodian higher education sector, there is little regulation of standards in curriculum design of undergraduate degrees in English language teacher education. The researcher, in the course of his professional work in the Curriculum and Policy Office at the Department of Higher Education, has seen evidence that most universities tend to copy their curriculum from one source, the curriculum of the Institute of Foreign Languages, the Royal University of Phnom Penh. Their programs fail to impose any entry standards, accepting students who pass the high school exam without any entrance examination. It is possible for a student to enter university with satisfactory scores in all subjects but English. Therefore, not many graduates are able to fulfil the professional requirements of the roles they are supposed to take. Neau (2010) claims that many Cambodian EFL teachers do not reach a high performance standard due to their low English language proficiency and poor background in teacher education. The main purpose of this study is to establish key guidelines for developing curricula for English language teacher education for all the universities across the country. It examines the content of the Bachelor's degree of Education in Teaching English as a Foreign Language (B Ed in TEFL) and Bachelor's degree of Arts in Teaching English to Speakers of Other Languages (BA in TESOL) curricula adopted in Cambodian universities on the basis of criteria proposed in current curriculum research. It also investigates the perspectives of Cambodian EFL teachers on the areas of knowledge and skill they need in order to perform their English teaching duties in Cambodia today. The areas of knowledge and skill offered in the current curricula at Cambodian higher education institutions (HEIs), the framework of the knowledge base for EFL teacher education and general higher education, and the areas of knowledge and skill Cambodian EFL teachers perceive to be important, are compared so as to identify any gaps in the current English language teacher education curricula in the Cambodian HEIs. The existence of gaps shows which domains of knowledge and skill need to be included in the English language teacher education curricula at Cambodian HEIs. These domains are those identified by previous curriculum researchers in both general and English language teacher education at tertiary level. Therefore, the present study provides useful insights into the importance of including appropriate content in English language teacher education curricula. Mixed methods are employed in this study. The course syllabi and the descriptions within the curricula in five Cambodian HEIs are analysed qualitatively based on the framework of knowledge and skills for EFL teachers, which is formed by looking at the knowledge base for second language teachers suggested by the methodologists and curriculum specialists whose work is elaborated on in the review of literature. A quantitative method is applied to analyse the perspectives of 120 Cambodian EFL teachers on areas of knowledge and skills they should possess. The fieldwork was conducted between June and August 2014. The analysis reveals that the following areas are included in the curricula at the five universities: communication skills, general knowledge, knowledge of teaching theories, teaching skills, pedagogical reasoning and decision making skills, subject matter knowledge, contextual knowledge, cognitive abilities, and knowledge of social issues.
Additionally, research skills are included in three curricula while society and community involvement is in only one. Further, information and communication technology, which is outlined in the Education Strategies Plan (2006-2010), forms part of four curricula while leadership skills form part of two. This study demonstrates ultimately that most domains that are directly and indirectly related to language teaching competence are not sufficiently represented in the current curricula. On the basis of its findings, the study concludes with a set of guidelines that should inform the design and development of TESOL and TEFL curricula in Cambodia.
Abstract:
This article presents the methodology and main results obtained in Spain within the FORMAR project, a European-funded project under the Leonardo Da Vinci scheme (Lifelong Learning Programme), whose main goal is to jointly develop training resources and modules to improve the skills of building maintenance and refurbishment workers on sustainability issues in three different European countries: Spain, Portugal (Project Coordinator) and France. The Units of Short-term Training (UST) developed within this project focus on the VET of carpenters, painters, bricklayers, building technicians and installers of solar panels, and a transversal unit containing basic concepts on sustainable construction and nearly Zero Energy Buildings (n-ZEB) has also been developed. In parallel, clients’ guides for the aforementioned professionals are also produced to improve the information provided to clients and owners and to support procurement decisions regarding building products and materials. The project therefore provides an opportunity to exchange experiences between organizations of these three European countries, as the UST will be developed simultaneously in each of them, exploring opportunities for training, guidance and exchange of experience. Even though the UST will have a common structure and contents, they will differ slightly in each country to adapt them to the specific training needs and regulations of Spain, Portugal and France. This paper details, as a case study, the development process of the UST for carpenters and building technicians in Spain, including the analysis of needs and existing training materials, the main contents developed, and the evaluation and testing process of the UST, which involves the active participation of several stakeholders of the sector as well as classroom testing to obtain the students’ feedback.
Abstract:
Scientific applications rely heavily on floating point data types. Floating point operations are complex and require complicated hardware that is both area and power intensive. The emergence of massively parallel architectures like Rigel creates new challenges and poses new questions with respect to floating point support. The massively parallel aspect of Rigel places great emphasis on area efficient, low power designs. At the same time, Rigel is a general purpose accelerator and must provide high performance for a wide class of applications. This thesis presents an analysis of various floating point unit (FPU) components with respect to Rigel, and attempts to present a candidate design of an FPU that balances performance, area, and power and is suitable for massively parallel architectures like Rigel.
Abstract:
This thesis presents approximation algorithms for some NP-Hard combinatorial optimization problems on graphs and networks; in particular, we study problems related to Network Design. Under the widely-believed complexity-theoretic assumption that P is not equal to NP, there are no efficient (i.e., polynomial-time) algorithms that solve these problems exactly. Hence, if one desires efficient algorithms for such problems, it is necessary to consider approximate solutions: An approximation algorithm for an NP-Hard problem is a polynomial time algorithm which, for any instance of the problem, finds a solution whose value is guaranteed to be within a multiplicative factor of the value of an optimal solution to that instance. We attempt to design algorithms for which this factor, referred to as the approximation ratio of the algorithm, is as small as possible. The field of Network Design comprises a large class of problems that deal with constructing networks of low cost and/or high capacity, routing data through existing networks, and many related issues. In this thesis, we focus chiefly on designing fault-tolerant networks. Two vertices u,v in a network are said to be k-edge-connected if deleting any set of k − 1 edges leaves u and v connected; similarly, they are k-vertex connected if deleting any set of k − 1 other vertices or edges leaves u and v connected. We focus on building networks that are highly connected, meaning that even if a small number of edges and nodes fail, the remaining nodes will still be able to communicate. A brief description of some of our results is given below. We study the problem of building 2-vertex-connected networks that are large and have low cost. Given an n-node graph with costs on its edges and any integer k, we give an O(log n log k) approximation for the problem of finding a minimum-cost 2-vertex-connected subgraph containing at least k nodes. We also give an algorithm of similar approximation ratio for maximizing the number of nodes in a 2-vertex-connected subgraph subject to a budget constraint on the total cost of its edges. Our algorithms are based on a pruning process that, given a 2-vertex-connected graph, finds a 2-vertex-connected subgraph of any desired size and of density comparable to the input graph, where the density of a graph is the ratio of its cost to the number of vertices it contains. This pruning algorithm is simple and efficient, and is likely to find additional applications. Recent breakthroughs on vertex-connectivity have made use of algorithms for element-connectivity problems. We develop an algorithm that, given a graph with some vertices marked as terminals, significantly simplifies the graph while preserving the pairwise element-connectivity of all terminals; in fact, the resulting graph is bipartite. We believe that our simplification/reduction algorithm will be a useful tool in many settings. We illustrate its applicability by giving algorithms to find many trees that each span a given terminal set, while being disjoint on edges and non-terminal vertices; such problems have applications in VLSI design and other areas. We also use this reduction algorithm to analyze simple algorithms for single-sink network design problems with high vertex-connectivity requirements; we give an O(k log n)-approximation for the problem of k-connecting a given set of terminals to a common sink. 
We study similar problems in which different types of links, of varying capacities and costs, can be used to connect nodes; assuming there are economies of scale, we give algorithms to construct low-cost networks with sufficient capacity or bandwidth to simultaneously support flow from each terminal to the common sink along many vertex-disjoint paths. We further investigate capacitated network design, where edges may have arbitrary costs and capacities. Given a connectivity requirement R_uv for each pair of vertices u,v, the goal is to find a low-cost network which, for each uv, can support a flow of R_uv units of traffic between u and v. We study several special cases of this problem, giving both algorithmic and hardness results. In addition to Network Design, we consider certain Traveling Salesperson-like problems, where the goal is to find short walks that visit many distinct vertices. We give a (2 + epsilon)-approximation for Orienteering in undirected graphs, achieving the best known approximation ratio, and the first approximation algorithm for Orienteering in directed graphs. We also give improved algorithms for Orienteering with time windows, in which vertices must be visited between specified release times and deadlines, and other related problems. These problems are motivated by applications in the fields of vehicle routing, delivery and transportation of goods, and robot path planning.
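The thesis's own algorithms cannot be reconstructed from the abstract, but the notion of an approximation ratio can be made concrete with a textbook network-design example: the classical MST-based 2-approximation for the metric Steiner tree problem, which connects a set of terminals at cost at most twice the optimum. The Python sketch below, run on a hypothetical toy graph, is included only to illustrate that concept; it is not one of the algorithms described above.

```python
import heapq
from itertools import combinations

def dijkstra(graph, src):
    """Shortest-path distances and predecessors from src in an undirected
    weighted graph given as an adjacency dict {u: {v: weight}}."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def steiner_2approx(graph, terminals):
    """Classical 2-approximation for the Steiner tree problem: take a minimum
    spanning tree of the terminals under the shortest-path metric, then expand
    each metric edge back into an actual shortest path. The returned edge set
    connects all terminals at cost at most twice the optimum."""
    sp = {t: dijkstra(graph, t) for t in terminals}   # (dist, prev) per terminal
    parent = {t: t for t in terminals}                # union-find for Kruskal
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree_edges = set()
    pairs = sorted(combinations(terminals, 2), key=lambda e: sp[e[0]][0][e[1]])
    for a, b in pairs:
        ra, rb = find(a), find(b)
        if ra == rb:
            continue                                  # a and b already connected
        parent[ra] = rb
        v = b                                         # expand metric edge (a, b)
        while v != a:
            u = sp[a][1][v]
            tree_edges.add(tuple(sorted((u, v))))
            v = u
    return tree_edges

# Hypothetical toy instance: terminals a, c, d in a star centred on b.
g = {"a": {"b": 1}, "b": {"a": 1, "c": 1, "d": 1}, "c": {"b": 1}, "d": {"b": 1}}
edges = steiner_2approx(g, ["a", "c", "d"])
print(edges, "cost =", sum(g[u][v] for u, v in edges))
```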
Abstract:
The transistor laser is a unique three-port device that operates simultaneously as a transistor and a laser. With quantum wells incorporated in the base regions of heterojunction bipolar transistors, the transistor laser possesses advantageous characteristics of fast base spontaneous carrier lifetime, high differential optical gain, and electrical-optical characteristics for direct “read-out” of its optical properties. These devices have demonstrated many useful features such as high-speed optical transmission without the limitations of resonance, non-linear mixing, frequency multiplication, negative resistance, and photon-assisted switching. To date, all of these devices have operated as multi-mode lasers without any type of wavelength selection or stabilizing mechanism. Stable single-mode distributed feedback diode laser sources are important in many applications, including spectroscopy, as pump sources for amplifiers and solid-state lasers, in coherent communication systems, and now, as transistor lasers, potentially in integrated optoelectronics. The subject of this work is to expand the future applications of the transistor laser by demonstrating the theoretical background, process development and device design necessary to achieve single-longitudinal-mode operation in a three-port transistor laser. A third-order distributed feedback surface grating is fabricated in the top emitter AlGaAs confining layers using soft photocurable nanoimprint lithography. The device produces continuous-wave laser operation with a peak wavelength of 959.75 nm and a threshold current of 13 mA when operating at -70 °C. For devices with cleaved ends, a side-mode suppression ratio greater than 25 dB has been achieved.
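For orientation, the period of such a distributed feedback grating follows from the Bragg condition m * lambda_B = 2 * n_eff * Lambda. The short sketch below uses the reported third-order grating and 959.75 nm peak wavelength together with an assumed effective index of 3.3 (a typical value for an AlGaAs waveguide, not a value given in the abstract).

```python
def dfb_grating_period(m: int, bragg_wavelength_nm: float, n_eff: float) -> float:
    """Grating period (nm) placing the Bragg resonance of an m-th order DFB
    grating at bragg_wavelength_nm, from m * lambda_B = 2 * n_eff * Lambda."""
    return m * bragg_wavelength_nm / (2.0 * n_eff)

# Third-order grating at the reported 959.75 nm peak; n_eff = 3.3 is assumed.
print(f"{dfb_grating_period(3, 959.75, 3.3):.0f} nm")  # ~436 nm
```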
Documentation control process of the Brazilian multipurpose reactor - conceptual design and basic design
Abstract:
This dissertation presents the design of three high-performance successive-approximation-register (SAR) analog-to-digital converters (ADCs) using distinct digital background calibration techniques under the framework of a generalized code-domain linear equalizer. These digital calibration techniques effectively and efficiently remove the static mismatch errors in the analog-to-digital (A/D) conversion. They enable aggressive scaling of the capacitive digital-to-analog converter (DAC), which also serves as the sampling capacitor, down to the kT/C limit. As a result, outstanding conversion linearity, high signal-to-noise ratio (SNR), high conversion speed, robustness, superb energy efficiency, and minimal chip area are accomplished simultaneously. The first design is a 12-bit 22.5/45-MS/s SAR ADC in a 0.13-μm CMOS process. It employs a perturbation-based calibration, based on the superposition property of linear systems, to digitally correct the capacitor mismatch error in the weighted DAC. With 3.0-mW power dissipation at a 1.2-V power supply and a 22.5-MS/s sample rate, it achieves a 71.1-dB signal-to-noise-plus-distortion ratio (SNDR) and a 94.6-dB spurious-free dynamic range (SFDR). At the Nyquist frequency, the conversion figure of merit (FoM) is 50.8 fJ/conversion-step, the best FoM reported to date (2010) for 12-bit ADCs. The SAR ADC core occupies 0.06 mm2, while the estimated area of the calibration circuits is 0.03 mm2. The second proposed digital calibration technique is a bit-wise-correlation-based digital calibration. It utilizes the statistical independence of an injected pseudo-random signal and the input signal to correct the DAC mismatch in SAR ADCs. This idea is experimentally verified in a 12-bit 37-MS/s SAR ADC fabricated in 65-nm CMOS implemented by Pingli Huang. This prototype chip achieves a 70.23-dB peak SNDR and an 81.02-dB peak SFDR, while occupying 0.12-mm2 silicon area and dissipating 9.14 mW from a 1.2-V supply with the synthesized digital calibration circuits included. The third work is an 8-bit, 600-MS/s, 10-way time-interleaved SAR ADC array fabricated in a 0.13-μm CMOS process. This work employs an adaptive digital equalization approach to calibrate both intra-channel nonlinearities and inter-channel mismatch errors. The prototype chip achieves 47.4-dB SNDR, 63.6-dB SFDR, less than 0.30-LSB differential nonlinearity (DNL), and less than 0.23-LSB integral nonlinearity (INL). The ADC array occupies an active area of 1.35 mm2 and dissipates 30.3 mW, including the synthesized digital calibration circuits and an on-chip dual-loop delay-locked loop (DLL) for clock generation and synchronization.
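For context, the figure of merit quoted above is the standard Walden FoM, P / (2^ENOB * f_s) with ENOB = (SNDR - 1.76) / 6.02. A small sketch using the first design's reported numbers is given below; the peak SNDR yields roughly 45 fJ/step, consistent with the quoted 50.8 fJ/conversion-step at Nyquist, where the SNDR (not listed in the abstract) is slightly lower.

```python
def walden_fom(power_w: float, sndr_db: float, fs_hz: float) -> float:
    """Conversion energy per step: FoM = P / (2**ENOB * fs),
    with ENOB = (SNDR - 1.76) / 6.02."""
    enob = (sndr_db - 1.76) / 6.02
    return power_w / (2.0 ** enob * fs_hz)

# First prototype's reported operating point: 3.0 mW, 22.5 MS/s, 71.1 dB peak SNDR.
# This gives ~45 fJ/step; the 50.8 fJ/step quoted at Nyquist reflects the slightly
# lower SNDR measured with a near-Nyquist input, which the abstract does not list.
print(f"{walden_fom(3.0e-3, 71.1, 22.5e6) * 1e15:.1f} fJ/conversion-step")
```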
Abstract:
High-ranking Chinese military officials are often quoted in international media as stating that China cannot afford to lose even an inch of Chinese territory, as this territory has been passed down from Chinese ancestors. Such statements are not new in Chinese politics, but recently this narrative has made an important transition. While previously limited to disputes over land borders, such rhetoric is now routinely applied to disputes involving islands and maritime borders. China is increasingly oriented toward its maritime borders and seems unwilling to compromise on delimitation disputes, a transition mirrored by many states across the globe. In a similar vein, scholarship has found that territorial disputes are particularly intractable and volatile when compared with other types of disputes, and a large body of research has grappled with producing systematic knowledge of territorial conflict. Yet in this wide body of literature, an important question has remained largely unanswered - how do states determine which geographical areas will be included in their territorial and maritime claims? In other words, if nations are willing to fight and die for an inch of national territory, how do governments draw the boundaries of the nation? This dissertation uses in-depth case studies of some of the most prominent territorial and maritime disputes in East Asia to argue that domestic political processes play a dominant and previously under-explored role in both shaping claims and determining the nature of territorial and maritime disputes. China and Taiwan are particularly well suited for this type of investigation, as they are separate claimants in multiple disputes, yet they both draw upon the same historical record when establishing and justifying their claims. Leveraging fieldwork in Taiwan, China, and the US, this dissertation includes in-depth case studies of China’s and Taiwan’s respective claims in both the South China Sea and East China Sea disputes. Evidence from this dissertation indicates that officials in both China and Taiwan have struggled with how to reconcile history and international law when establishing their claims, and that this struggle has introduced ambiguity into China's and Taiwan's claims. Amid this process, domestic political dynamics have played a dominant role in shaping the options available and the potential for claims to change in the future. In Taiwan’s democratic system, where national identity is highly contested through party politics, opinions vary along a broad spectrum as to the proper borders of the nation, and there is considerable evidence that Taiwan’s claims may change in the near future. In contrast, within China’s single-party authoritarian political system, where nationalism is a source of regime legitimacy, views on the proper interpretation of China’s boundaries do vary, but along a much narrower range. In the dissertation’s final chapter, additional cases, such as South Korea’s position on Dokdo and Indonesia’s approach to the defense of Natuna, are used as points of comparison to further clarify theoretical findings.
Abstract:
Today, providing drinking water and process water is one of the major problems in most countries; surface water often needs to be treated to achieve the necessary quality, and technological as well as financial difficulties impose great restrictions on operating the treatment units. Although water supply by simple and cheap systems has been one of the important objectives of many scientific and research centers around the world, a large percentage of the population in developing countries, especially in rural areas, still does not have access to good-quality water. One of the large and available sources of acceptable water is seawater. There are two main ways to treat seawater: evaporation and reverse osmosis (RO). Nowadays RO systems are widely used for desalination because of their relatively low cost and ease of operation and maintenance. Seawater should be pretreated before the RO plant, because raw seawater contains constituents that can reduce the performance of the membranes in the RO system. This research is intended to be useful in that respect, with the aim of achieving a successful design and construction of practical pretreatment systems for RO plants. One of the most important units in a seawater pretreatment plant is filtration; the conventional method is pressurized sand filters, and this research concerns a newer type of filter called the continuous backwash sand filter (CBWSF). The CBWSF designed and tested in this research may be used more economically and with less difficulty. It consists of two main parts: the shell body and a central part comprising an airlift pump, raw water feeding pipe, air supply hose, backwash chamber and sand washer, as well as inlet and outlet connections. The CBWSF is a continuously operating filter, i.e. the filter does not have to be taken out of operation for backwashing or cleaning. Inlet water is fed through the sand bed while the sand bed moves downwards. The water is filtered while the sand becomes dirty; simultaneously, the dirty sand is cleaned in the sand washer and the suspended solids are discharged in the backwash water. We analyze the behavior of the CBWSF in pretreatment of seawater in place of a pressurized sand filter. One important factor that is harmful to RO membranes is bio-fouling, which is quantified by the Silt Density Index (SDI). This research focused on decreasing SDI and turbidity (NTU). Based on this goal, a prototype pretreatment unit was designed and manufactured for testing; the system design was carried out mainly using the design fundamentals of the CBWSF. The automatic backwash sand filter can be used in both small and large water supply schemes. In large water treatment plants, the filter units perform the filtration and backwash stages separately, while in small treatment plants the unit is usually compacted to reduce energy consumption. The analysis of the system showed that it may be used feasibly for water treatment, especially for a limited population. Its construction is rapid, simple and economical, and its performance is high because no moving mechanical part is used in it, so it may be proposed as an effective method to improve water quality, and consequently the level of hygiene, in remote places of the country.
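For reference, the Silt Density Index mentioned above is defined (as in ASTM D4189) as the percentage flux decline per minute of a timed filtration through a 0.45 µm membrane at constant pressure. The sketch below computes SDI15 from hypothetical timing readings; the numbers are illustrative, not measurements from this research.

```python
def silt_density_index(t_initial_s: float, t_final_s: float, elapsed_min: float = 15.0) -> float:
    """SDI: percent flux decline per minute through a 0.45 um membrane at constant
    pressure. t_initial_s and t_final_s are the times (s) needed to collect the same
    fixed volume (typically 500 mL) at the start of the test and after elapsed_min
    minutes of continuous filtration."""
    return (1.0 - t_initial_s / t_final_s) / elapsed_min * 100.0

# Hypothetical readings: 30 s initially, 52 s after 15 minutes -> SDI15 ~ 2.8,
# within the range commonly considered acceptable for RO feed water.
print(f"SDI15 = {silt_density_index(30.0, 52.0):.1f}")
```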
Abstract:
INTRODUCTION: In common with much of the developed world, Scotland has a severe and well-established problem with overweight and obesity in childhood, with recent figures demonstrating that 31% of Scottish children aged 2-15 years were overweight (including obese) in 2014. This problem is more pronounced in socioeconomically disadvantaged groups and in older children across all economic groups (Scottish Health Survey, 2014). Children who are overweight or obese are at increased risk of a number of adverse health outcomes in the short term and throughout their life course (Lobstein and Jackson-Leach, 2006). The Scottish Government tasked all Scottish Health Boards with developing and delivering child healthy weight interventions to clinically overweight or obese children in an attempt to address this health problem. It is therefore imperative to deliver high-quality, affordable, appropriately targeted interventions which can make a sustained impact on children’s lifestyles, setting them up for life as healthy-weight adults. This research aimed to inform the design, readiness for application and Health Board suitability of an effective primary school-based curricular child healthy weight intervention. METHODS: The process involved in conceptualising a child healthy weight intervention, developing the intervention, planning for implementation and the subsequent evaluation was guided by the PRECEDE-PROCEED Model (Green and Kreuter, 2005) and the Intervention Mapping protocol (Lloyd et al. 2011). RESULTS: The outputs from each stage of the development process were used to formulate a child healthy weight intervention conceptual model and then to develop plans for delivery and evaluation. DISCUSSION: The Fit for School conceptual model developed through this process has the potential to theoretically modify energy-balance-related behaviours associated with unhealthy weight gain in childhood. It also has the potential to be delivered at Health Board scale within current organisational restrictions.