905 results for Running Kinematics
Abstract:
Traffic congestion is one of the biggest challenges facing cities worldwide. Car traffic and traffic jams cause major problems, and congestion is predicted to worsen in the future. The greenhouse effect poses a severe threat to the global environment. At the same time, companies and other economic actors lose time and money because of congestion. This work studies possible traffic payment systems for the Helsinki Metropolitan Area, introducing three alternative models and concentrating on the perspective of economic actors. A central part of the work is a questionnaire survey conducted among companies located in the Helsinki area, which yielded more than 1,000 responses. The study examines respondents' attitudes toward the area's current traffic system, its development, and urban congestion pricing; the answers are analyzed by company size, industry, and location. The economic aspect is studied through the theory of industrial location and by emphasizing the importance of smoothly running traffic for business. Chapter three presents detailed information on traffic congestion: how today's car-centered society has formed, what congestion concretely means for economic life, and how it can be limited. It also examines theoretically how urban traffic payment systems work, using examples from London and Stockholm, where successful pricing schemes are in operation. The literature review analyzes urban development, increasing car traffic, and the Helsinki Metropolitan Area from a structural point of view. The fourth chapter introduces a case study concentrating on the different structures of the Helsinki Metropolitan Area, the congestion situation in Helsinki, and the clarification work on the proposed traffic payment system. The region is currently going through a phase of major changes in traffic planning: traffic systems are being unified to cover the whole region, and new measures against growing congestion are needed. Chapter five concentrates on the questionnaire and thematic interviews and presents the research findings. The respondents' overall opinion of traffic payments is quite skeptical. Some regional differences were found, and taxi, bus, cargo, and transit companies in particular held the most negative views. Economic actors were especially worried that congestion harms business travel and employees' commutes to and from work. According to the respondents, the best of the proposed payment models was the ring model, in which the payment points would be located inside Ring Road III. Both company representatives and other key decision makers see public transportation as a good and powerful tool for decreasing traffic congestion. The remaining question is where to find investors willing to fund public transportation if economic representatives do not believe in pricing traffic through, for example, traffic payment systems.
Abstract:
Many process-control systems are air-operated. In such an environment, it would be desirable and economical to use pneumatic sensors. Bubble back-pressure sensors perform quite satisfactorily, but for viscous, inflammable, and slurry-like liquids with a tendency to froth, this type of level sensor is inadequate. The method suggested in this paper uses a pneumatic capacitor, one boundary of which is formed by the liquid level, to modulate a fluid-amplifier feedback oscillator. The absence of moving parts and the economy obtained make this method attractive for process-control applications. The system has been mathematically modeled and simulated on an IBM 360/44 digital computer. Experimental values compare fairly well with the theoretical results. For the range tested, the sensor is found to have a linear frequency variation with the liquid level. Extended running in the laboratory shows that the system is very reliable. The system has also been found insensitive to temperature variations of up to 15°C.
Abstract:
We present a distributed algorithm that finds a maximal edge packing in O(Δ + log* W) synchronous communication rounds in a weighted graph, independent of the number of nodes in the network; here Δ is the maximum degree of the graph and W is the maximum weight. As a direct application, we have a distributed 2-approximation algorithm for minimum-weight vertex cover, with the same running time. We also show how to find an f-approximation of minimum-weight set cover in O(f²k² + fk log* W) rounds; here k is the maximum size of a subset in the set cover instance, f is the maximum frequency of an element, and W is the maximum weight of a subset. The algorithms are deterministic, and they can be applied in anonymous networks.
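The abstract ties maximal edge packing to 2-approximate minimum-weight vertex cover. As a hedged illustration of that connection only (not the paper's distributed O(Δ + log* W)-round algorithm), the sequential sketch below greedily raises a packing value on each edge until one endpoint's weight constraint becomes tight; the tight vertices then form a vertex cover of weight at most twice the optimum. The function and variable names are assumptions made for this example.

```python
# Hedged sequential sketch of a maximal edge packing (NOT the paper's
# distributed algorithm): raise y_e on each edge until one endpoint's
# constraint  sum of incident y_e <= w(v)  becomes tight.
def maximal_edge_packing(vertex_weight, edges):
    """Return (y, cover): packing values and the set of tight vertices.

    vertex_weight: dict vertex -> non-negative weight w(v)
    edges: list of (u, v) pairs
    The tight vertices form a vertex cover of weight at most twice the
    optimum (standard LP-duality / local-ratio argument)."""
    slack = dict(vertex_weight)        # remaining capacity of each vertex
    y = {}
    for (u, v) in edges:
        inc = min(slack[u], slack[v])  # largest feasible increase for this edge
        y[(u, v)] = inc
        slack[u] -= inc
        slack[v] -= inc                # at least one endpoint is now tight
    cover = {v for v, s in slack.items() if s == 0}
    return y, cover

# Tiny example: path a-b-c with unit weights; the optimum cover is {b}
# (weight 1), and the packing-based cover {a, b} has weight 2 <= 2 * optimum.
print(maximal_edge_packing({"a": 1, "b": 1, "c": 1}, [("a", "b"), ("b", "c")]))
```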
Abstract:
This thesis in the field of translation studies focusses on the role of norms in the work of a literary translator. Norms are seen as guidelines for the translator; they also reflect the way the target audience either accepts or rejects the translation. Thus they are of an intersubjective nature. The theoretical background of the study is based on the views on norms of Andrew Chesterman and Gideon Toury. The writer makes use of her own editing and publishing experience, as well as her experience in translating Lewis Carroll, considering these with respect to theoretical views of norms, and vice versa. The aim is also to bring to light some of the tacit knowledge related to translating, editing and publishing in Finland. The study has three angles. First, the writer introduces the norms of Finnish literary translation as gathered from her own working experience. The sources from which these norms arise and which affect them are briefly described. Six central translation norms emerge; they are described and exemplified through cases of Carroll translation. Secondly, a small-scale questionnaire study is presented. This was carried out in order to sound out the normative views of other translators and to limit the role of subjectivity. The views of the informants largely support the set of norms presented by the writer, although the norms of quotability and of harmony between translation and illustration do not arise. Instead, the answers give rise to a seventh, ethical norm, which is labelled the norm of integrity. Thirdly, there is a brief presentation of Lewis Carroll, his Alice books and their translation history in Finland. The retranslation hypothesis and the motives for retranslation are considered in the light of the work of Outi Paloposki and Kaisa Koskinen. The final part of the thesis plunges into actual translation work. It includes one and a half chapters of Through the Looking-Glass (Alicen seikkailut peilintakamaassa) as translated into Finnish by the writer. The translation commentary first discusses a number of recurring and general translation points; the running commentary then analyses 77 individual translation solutions and their justifications. The writer uses introspection as a way of reflecting on her own translation process, its decisive points and the role of norms therein. Keywords: Translation studies, Carroll, norms
Abstract:
Cast aluminium alloy mica particle composites of varying mica content were tested in tension, compression, and impact. With 2.2 percent mica (size range 40 µm to 120 µm) the tensile and compression strengths of the aluminium alloy decreased by 56 and 22 percent, respectively. The corresponding decreases in percent elongation and percent reduction are 49 and 39 percent. Previous work [2] shows that despite this decrease in strength, the composite with 2.5 percent mica, having a UTS of 15 kg/mm² and a compression strength of 28 kg/mm², performs well as a bearing material under severe running conditions. The differences in strength characteristics of cast aluminium-mica particle composites between tension and compression suggest that, as in cast iron, expansion of voids at the matrix-particle interface may be the guiding mechanism of the deformation. SEM studies show that on the tensile-fractured specimen surface there are large voids at the particle-matrix interface.
Abstract:
A 4 Å electron-density map of Pf1 filamentous bacterial virus has been calculated from x-ray fiber diffraction data by using the maximum-entropy method. This method produces a map that is free of features due to noise in the data and enables incomplete isomorphous-derivative phase information to be supplemented by information about the nature of the solution. The map shows gently curved (banana-shaped) rods of density about 70 Å long, oriented roughly parallel to the virion axis but slewing by about 1/6th turn while running from a radius of 28 Å to one of 13 Å. Within these rods, there is a helical periodicity with a pitch of 5 to 6 Å. We interpret these rods to be the helical subunits of the virion. The position of strongly diffracted intensity on the x-ray fiber pattern shows that the basic helix of the virion is right-handed and that neighboring, nearly parallel protein helices cross one another in an unusual negative sense.
Abstract:
Electronic, magnetic, and structural properties of graphene flakes depend sensitively on the type of edge atoms. We present a simple software tool for determining the type of edge atoms in a honeycomb lattice. The algorithm is based on nearest-neighbor counting. Whether an edge atom is of armchair or zigzag type is decided by the unique pattern of its nearest neighbors. Particular attention is paid to the practical aspects of using the tool, as additional features such as extracting the edges from the lattice could help in analyzing images from transmission microscopy or other experimental probes. Ultimately, the tool, in combination with density-functional theory or tight-binding methods, can also be helpful in correlating the properties of graphene flakes with their different armchair-to-zigzag ratios.
Program summary
Program title: edgecount
Catalogue identifier: AEIA_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIA_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 66 685
No. of bytes in distributed program, including test data, etc.: 485 381
Distribution format: tar.gz
Programming language: FORTRAN 90/95
Computer: Most UNIX-based platforms
Operating system: Linux, Mac OS
Classification: 16.1, 7.8
Nature of problem: Detection and classification of edge atoms in a finite patch of honeycomb lattice.
Solution method: Build nearest-neighbor (NN) list; assign types to edge atoms on the basis of their NN pattern.
Running time: Typically on the order of seconds for all examples.
(C) 2010 Elsevier B.V. All rights reserved.
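The edgecount tool itself is distributed as FORTRAN 90/95 through the CPC Program Library; the snippet below is only a minimal Python sketch of the nearest-neighbor-counting idea described in the abstract. The classification rule used here is a common heuristic and an assumption, not necessarily the program's exact rule: three-coordinated atoms are bulk, two-coordinated atoms bonded to another two-coordinated atom are armchair, and two-coordinated atoms bonded only to bulk atoms are zigzag.

```python
import numpy as np

def classify_edge_atoms(coords, bond_length=1.42, tol=0.1):
    """Label atoms of a finite honeycomb patch by nearest-neighbor counting.

    coords: (N, 2) array of atom positions, in the same unit as bond_length
    (1.42 Angstrom is the usual C-C distance, an assumption of this sketch).
    Returns one label per atom: 'bulk', 'armchair', 'zigzag', or 'other'."""
    coords = np.asarray(coords, dtype=float)
    n = len(coords)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cutoff = bond_length * (1.0 + tol)
    # Nearest-neighbor lists: atoms within one bond length (excluding self).
    nn = [np.where((dist[i] > 1e-9) & (dist[i] < cutoff))[0] for i in range(n)]

    labels = []
    for i in range(n):
        if len(nn[i]) >= 3:
            labels.append("bulk")        # fully coordinated interior atom
        elif len(nn[i]) == 2:
            # Heuristic: armchair edge atoms occur in bonded pairs of
            # two-coordinated atoms; zigzag edge atoms have only bulk neighbors.
            if any(len(nn[j]) == 2 for j in nn[i]):
                labels.append("armchair")
            else:
                labels.append("zigzag")
        else:
            labels.append("other")       # corners, adatoms, isolated atoms
    return labels
```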
Abstract:
The performance of a program will ultimately be limited by its serial (scalar) portion, as pointed out by Amdahl's Law. Reported studies thus far of instruction-level parallelism have mixed data-parallel program portions with scalar program portions, often leading to contradictory and controversial results. We report an instruction-level behavioral characterization of scalar code containing minimal data-parallelism, extracted from highly vectorized programs of the PERFECT benchmark suite running on a Cray Y-MP system. We classify scalar basic blocks according to their instruction mix, characterize the data dependencies seen in each class, and, as a first step, measure the maximum intrablock instruction-level parallelism available. We observe skewed rather than balanced instruction distributions in scalar code and in individual basic block classes of scalar code; nonuniform distribution of parallelism across instruction classes; and, as expected, limited available intrablock parallelism. We identify frequently occurring data-dependence patterns and discuss new instructions to reduce latency. Toward effective scalar hardware, we study latency-pipelining trade-offs and restricted multiple instruction issue mechanisms.
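As a hedged illustration of the kind of intrablock measurement described above (not the authors' Cray Y-MP tooling), the sketch below estimates the maximum instruction-level parallelism of a single basic block as instruction count divided by the critical-path length of its true-dependence DAG, assuming unit latencies; the function name and the register-tuple input format are assumptions made for the example.

```python
def intrablock_ilp(instructions):
    """Estimate max intrablock ILP as instruction count / critical-path length.

    instructions: list of (dest, sources) tuples of register names, in program
    order.  Only true (read-after-write) dependences are tracked, and every
    instruction is assumed to take one cycle."""
    depth = {}         # longest dependence chain ending at instruction i
    last_writer = {}   # register name -> index of its most recent definition
    for i, (dest, sources) in enumerate(instructions):
        preds = [last_writer[r] for r in sources if r in last_writer]
        depth[i] = 1 + max((depth[p] for p in preds), default=0)
        last_writer[dest] = i
    critical_path = max(depth.values(), default=1)
    return len(instructions) / critical_path

# Two independent two-instruction chains: 4 instructions / critical path 2 = 2.0
block = [("r1", ["a"]), ("r2", ["r1"]), ("r3", ["b"]), ("r4", ["r3"])]
print(intrablock_ilp(block))
```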
Abstract:
We consider the problem of computing an approximate minimum cycle basis of an undirected non-negative edge-weighted graph G with m edges and n vertices; the extension to directed graphs is also discussed. In this problem, a {0,1} incidence vector is associated with each cycle and the vector space over F_2 generated by these vectors is the cycle space of G. A set of cycles is called a cycle basis of G if it forms a basis for its cycle space. A cycle basis where the sum of the weights of the cycles is minimum is called a minimum cycle basis of G. Cycle bases of low weight are useful in a number of contexts, e.g. the analysis of electrical networks, structural engineering, chemistry, and surface reconstruction. Although in most such applications any cycle basis can be used, a low-weight cycle basis often translates to better performance and/or numerical stability. Despite the fact that the problem can be solved exactly in polynomial time, we design approximation algorithms, since the exact algorithms may be too expensive for some practical applications. We present two new algorithms to compute an approximate minimum cycle basis. For any integer k >= 1, we give (2k - 1)-approximation algorithms with expected running time O(k m n^(1+2/k) + m n^((1+1/k)(ω-1))) and deterministic running time O(n^(3+2/k)), respectively. Here ω is the best exponent of matrix multiplication; it is presently known that ω < 2.376. Both algorithms are o(m^ω) for dense graphs. This is the first time that any algorithm which computes sparse cycle bases with a guarantee drops below the Θ(m^ω) bound. We also present a 2-approximation algorithm with expected running time O(m^ω √(n log n)), a linear-time 2-approximation algorithm for planar graphs, and an O(n^3)-time 2.42-approximation algorithm for the complete Euclidean graph in the plane.
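The approximation algorithms themselves are involved; as a hedged, self-contained illustration of the underlying objects (not the paper's method), the sketch below constructs the classical fundamental cycle basis from a spanning tree. Each non-tree edge closes exactly one cycle with the tree, and the resulting m - n + 1 cycles form a basis of the cycle space over F_2, though generally not a minimum-weight one.

```python
from collections import deque

def fundamental_cycle_basis(n, edges):
    """Fundamental cycle basis of a connected graph.

    n: number of vertices, labelled 0..n-1; edges: list of (u, v) pairs.
    Returns m - n + 1 cycles, each as a sorted list of edge indices; their
    {0,1} incidence vectors form a basis of the cycle space over F_2."""
    adj = [[] for _ in range(n)]
    for idx, (u, v) in enumerate(edges):
        adj[u].append((v, idx))
        adj[v].append((u, idx))

    # Breadth-first spanning tree rooted at vertex 0.
    parent_edge = [None] * n
    seen = [False] * n
    seen[0] = True
    tree_edges = set()
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for v, idx in adj[u]:
            if not seen[v]:
                seen[v] = True
                parent_edge[v] = (u, idx)
                tree_edges.add(idx)
                queue.append(v)

    def path_to_root(v):
        path = set()
        while parent_edge[v] is not None:
            u, idx = parent_edge[v]
            path.add(idx)
            v = u
        return path

    basis = []
    for idx, (u, v) in enumerate(edges):
        if idx not in tree_edges:
            # The non-tree edge plus the symmetric difference of the two
            # root paths is the unique cycle it closes with the tree.
            cycle = path_to_root(u) ^ path_to_root(v)
            cycle.add(idx)
            basis.append(sorted(cycle))
    return basis

# Square 0-1-2-3 with a diagonal 0-2: two fundamental cycles (the two triangles).
print(fundamental_cycle_basis(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))
```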
Abstract:
The worldwide research in nanoelectronics is motivated by the fact that the scaling of MOSFETs by the conventional top-down approach will not continue forever, owing to fundamental limits imposed by physics, even if it is delayed for some more years. The research community in this domain has become largely multidisciplinary, trying to discover novel transistor structures built with novel materials so that the semiconductor industry can continue to follow its projected roadmap. However, setting up and running a nanoelectronics facility for research is hugely expensive. Therefore it is a common model to set up a central networked facility that can be shared by a large number of users across the research community. The Centres for Excellence in Nanoelectronics (CEN) at the Indian Institute of Science, Bangalore (IISc) and the Indian Institute of Technology, Bombay (IITB) are such central networked facilities, set up in 2005 with funding of about USD 20 million from the Department of Information Technology (DIT), Ministry of Communications and Information Technology (MCIT), Government of India. The Indian Nanoelectronics Users Program (INUP) is a missionary program intended not only to spread awareness and provide training in nanoelectronics but also to provide easy access to the latest facilities at CEN in IISc and IITB for the wider nanoelectronics research community in India. This program, also funded by MCIT, aims to train researchers by conducting workshops and hands-on training programs and by providing access to the CEN facilities. It is a unique program aiming to expedite nanoelectronics research in the country, as the funding required for projects proposed by researchers from around India has prior financial approval from the government and requires only technical approval by the IISc/IITB team. This paper discusses the objectives of INUP, gives brief descriptions of the CEN facilities and the training programs conducted by INUP, and lists various research activities currently under way in the program.
Abstract:
A performance prediction model generally applicable to volute-type centrifugal pumps has been extended to predict the dynamic characteristics of a pump during its normal starting and stopping periods. Experiments have been conducted on a volute pump with different valve openings to study the dynamic behaviour of the pump during normal start-up and stopping, when a short length of discharge pipeline is connected to the discharge flange of the pump. Such experiments have also been conducted with the test pump as part of a hydraulic system, an experimental rig in which it pumps against three similar pumps, known as supply pumps, connected in series, with the supply pumps either kept idle or running. Instantaneous rotational speed, flow rate, and delivery and suction pressures of the pump were recorded, and it was observed in all the tested cases that the change of pump behaviour during the transient period was quasi-steady, which validates the quasi-steady approach presented in this paper. The nature of the variation of these parameters during the transients is discussed. The model-predicted dynamic head-capacity curves agree well with the experimental data for almost all the tested cases.
Abstract:
The relative quantum yields, φ*, for the production of I*(²P₁/₂) at 266, 280, and ~305 nm are reported for a series of primary alkyl iodides using the technique of two-photon laser-induced fluorescence for the detection of I(²P₃/₂) and I*(²P₁/₂) atoms. Results are analyzed by invoking the impulsive energy-disposal model, which summarizes the dynamics of dissociation as a single parameter. A comparison of our data with those calculated by a more sophisticated time-dependent quantum mechanical model is also made. Near the red edge of the alkyl iodide A band, the absorption contribution from the ³Q₁ state is important, and the dynamics near the ³Q₀-¹Q₁ curve-crossing region appear to be influenced by the kinematics of the dissociation process.
Abstract:
We present a frontier-based algorithm for searching for multiple goals in a fully unknown environment, with only information about the regions where the goals are most likely to be located. Our algorithm chooses an "active goal" from the "active goal list" generated by running a Traveling Salesman Problem (TSP) routine on the given centroid locations of the goal regions. We use the concept of "goal switching", which helps not only in reaching a larger number of goals in a given time but also in preventing unnecessary search around goals that are not accessible (surrounded by walls). A simulation study shows that our algorithm outperforms Multi-Heuristic LRTA* (MELRTA*), a significant representative of multiple-goal search approaches in unknown environments, especially in environments with wall-like obstacles.
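The abstract does not specify which TSP routine generates the active goal list; as a hedged sketch of that single step, the snippet below orders the given goal-region centroids with a simple nearest-neighbor tour from the robot's start position. The function name and interface are illustrative assumptions, not the paper's implementation.

```python
import math

def active_goal_list(start, centroids):
    """Order goal-region centroids with a nearest-neighbor tour from `start`.

    start: (x, y) robot position; centroids: list of (x, y) goal-region
    centroids.  Returns the centroids in visiting order."""
    remaining = list(centroids)
    order = []
    current = start
    while remaining:
        nearest = min(remaining, key=lambda c: math.dist(current, c))
        remaining.remove(nearest)
        order.append(nearest)
        current = nearest
    return order

# The first entry of the returned list would serve as the initial "active goal".
print(active_goal_list((0, 0), [(5, 5), (1, 0), (2, 3)]))
```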
Abstract:
The authors present the simulation of tropical Pacific surface wind variability by a low-resolution (R15 horizontal resolution and 18 vertical levels) version of the Center for Ocean-Land-Atmosphere Interactions, Maryland, general circulation model (GCM) when forced by observed global sea surface temperature. The authors have examined the monthly mean surface winds and precipitation simulated by the model, which was integrated from January 1979 to March 1992. Analyses of the climatological annual cycle and interannual variability over the Pacific are presented. The annual means of the simulated zonal and meridional winds agree well with observations. The only appreciable difference is in the region of strong trade winds, where the simulated zonal winds are about 15%-20% weaker than observed. The amplitudes of the annual harmonics are weaker than observed over the intertropical convergence zone and the South Pacific convergence zone regions. The amplitudes of the interannual variation of the simulated zonal and meridional winds are close to those of the observed variation. The first few dominant empirical orthogonal functions (EOFs) of the simulated, as well as the observed, monthly mean winds are found to contain a large amount of high-frequency intraseasonal variation. While the statistical properties of the high-frequency modes, such as their amplitude and geographical locations, agree with observations, their detailed time evolution does not. When the data are subjected to a 5-month running-mean filter, the first two dominant EOFs of the simulated winds, representing the low-frequency El Niño-Southern Oscillation fluctuations, compare quite well with observations. However, the center of the westerly anomalies associated with the warm episodes is simulated about 15 degrees west of the observed location. The model simulates well the progress of the westerly anomalies toward the eastern Pacific during the evolution of a warm event. The simulated equatorial wind anomalies are comparable in magnitude to the observed anomalies. An intercomparison of the simulation of interannual variability by a few other GCMs with comparable resolution is also presented. The success in simulating the large-scale low-frequency part of the tropical surface winds by the atmospheric GCM seems to be related to the model's ability to simulate the large-scale low-frequency part of the precipitation. Good correspondence between the simulated precipitation and the highly reflective cloud anomalies is seen in the first two EOFs of the 5-month running means. Moreover, the strong correlation found between the simulated precipitation and the simulated winds in the first two principal components indicates the primary role of model precipitation in driving the surface winds. The surface winds simulated by a linear model forced by the GCM-simulated precipitation show good resemblance to the GCM-simulated winds in the equatorial region. This result supports the recent findings that the large-scale part of the tropical surface winds is primarily linear.
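As a small, hedged illustration of the smoothing step mentioned above (a centered 5-month running mean applied before the low-frequency EOF comparison), the sketch below low-pass filters a monthly series so that intraseasonal noise is suppressed while the interannual signal is retained. It does not reproduce the paper's EOF analysis, and the synthetic series is purely illustrative.

```python
import numpy as np

def running_mean(series, window=5):
    """Centered running mean; positions where the window is incomplete stay NaN."""
    series = np.asarray(series, dtype=float)
    smoothed = np.full(series.shape, np.nan)
    half = window // 2
    for i in range(half, len(series) - half):
        smoothed[i] = series[i - half:i + half + 1].mean()
    return smoothed

# Synthetic monthly series: a slow 4-year oscillation plus intraseasonal noise.
months = np.arange(120)
winds = np.sin(2.0 * np.pi * months / 48.0) + 0.5 * np.random.randn(120)
low_frequency_part = running_mean(winds, window=5)
```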
Abstract:
The frequently observed lopsidedness of the distribution of stars and gas in disc galaxies is still considered a major problem in galaxy dynamics. It is even discussed as an imprint of the formation history of discs and the evolution of baryons in dark matter haloes. Here, we analyse a selected sample of 70 galaxies from the Westerbork H I Survey of Spiral and Irregular Galaxies. The H I data allow us to follow the morphology and the kinematics out to very large radii. In the present paper, we present the rotation curves and study the kinematic asymmetry. We extract the rotation curves of the receding and approaching sides separately and show that the kinematic behaviour of disc galaxies can be classified into five different types: symmetric velocity fields, where the rotation curves of the receding and approaching sides are almost identical; global distortions, where the rotation velocities of the receding and approaching sides have an offset that is constant with radius; local distortions, leading to large deviations in the inner and negligible deviations in the outer parts (and vice versa); and distortions that divide the galaxies into two kinematic systems, visible in the different behaviour of the rotation curves of the receding and approaching sides, which leads to a crossing and a change of side. The kinematic lopsidedness is measured from the maximum rotation velocities, averaged over the plateau of the rotation curves. This gives a good estimate of the global lopsidedness in the outer parts of the sample galaxies. We find that the mean value of the perturbation parameter denoting the lopsided potential, as obtained from the kinematic data, is 0.056. Altogether, 36% of the sample galaxies are globally lopsided, which can be interpreted as the disc responding to a halo that was distorted by a tidal encounter. In Paper II, we study the morphological lopsidedness of the same sample of galaxies.