963 results for tense and aspect
Abstract:
Subjects read and recalled a series of five short stories in one of four plot and style combinations. The stories were written in one of two styles that consisted of opposing clause orders (i.e., independent-dependent vs. dependent-independent), tense forms (i.e., past vs. present), and descriptor forms (i.e., descriptor as a modifier vs. descriptor as a noun). The subjects incorporated both plot and style characteristics into their recalls. Other subjects, who after five recalls either generated a new story or listed the rules followed by the stories they had read, more often included the marked forms of the characteristics they had learned, except for tense. The subjects read and recalled four stories of the same plot and style and then read and recalled a fifth story of the same plot and style or of one of the other three plot/style combinations. The ability to switch style depended on both the characteristic and its markedness.
Abstract:
In the United States, poverty has historically been higher and disproportionately concentrated in the American South. Despite this, much of the conventional poverty literature in the United States has focused on urban poverty, particularly in cities of the Northeast and Midwest. Relatively little American poverty research has focused on the enduring economic distress in the South, which Wimberley (2008:899) calls “a neglected regional crisis of historic and contemporary urgency.” Accordingly, this dissertation contributes to the inequality literature by focusing much-needed attention on poverty in the South.
Each empirical chapter focuses on a different aspect of poverty in the South. Chapter 2 examines why poverty is higher in the South relative to the Non-South. Chapter 3 focuses on poverty predictors within the South and whether there are differences in the sub-regions of the Deep South and Peripheral South. These two chapters compare the roles of family demography, economic structure, racial/ethnic composition and heterogeneity, and power resources in shaping poverty. Chapter 4 examines whether poverty in the South has been shaped by historical racial regimes.
The Luxembourg Income Study (LIS) United States datasets (2000, 2004, 2007, 2010, and 2013), derived from the U.S. Census Current Population Survey (CPS) Annual Social and Economic Supplement, provide all the individual-level data for this study. The LIS sample of 745,135 individuals is nested in rich economic, political, and racial state-level data compiled from multiple sources (e.g., the U.S. Census Bureau, the U.S. Department of Agriculture, and the University of Kentucky Center for Poverty Research). Analyses involve a combination of techniques, including linear probability regression models to predict poverty and binary decomposition of poverty differences.
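The modelling strategy described above (a linear probability model for poverty plus a binary decomposition of the South/non-South poverty gap) can be sketched as follows. This is a minimal illustration, not the dissertation's code; the variable names and the synthetic data are assumptions made for the example, and the pooled-coefficient decomposition is one common convention.

# Minimal sketch (not the dissertation's actual code): a linear probability model
# of poverty and a simple two-group decomposition of the South/non-South poverty gap.
# Variable names and the synthetic microdata below are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def decompose_gap(df, group, outcome, predictors):
    """Binary (Oaxaca/Blinder-style) decomposition of the mean outcome gap between
    group==1 and group==0 into an 'explained' part (differences in predictor means,
    weighted by pooled coefficients) and an 'unexplained' remainder."""
    pooled = smf.ols(f"{outcome} ~ " + " + ".join(predictors), data=df).fit()
    g1, g0 = df[df[group] == 1], df[df[group] == 0]
    gap = g1[outcome].mean() - g0[outcome].mean()
    explained = float(((g1[predictors].mean() - g0[predictors].mean())
                       * pooled.params[predictors]).sum())
    return {"gap": gap, "explained": explained, "unexplained": gap - explained}

# Illustrative synthetic microdata standing in for the LIS person-level sample.
rng = np.random.default_rng(0)
n = 5000
south = rng.integers(0, 2, n)
educ_years = rng.normal(13 - 0.6 * south, 2.5)
employed = (rng.random(n) < 0.75 - 0.05 * south).astype(int)
poor = (rng.random(n) < 0.25 - 0.01 * educ_years - 0.10 * employed + 0.03 * south).astype(int)
df = pd.DataFrame(dict(poor=poor, south=south, educ_years=educ_years, employed=employed))

# Linear probability model of poverty with robust (HC1) standard errors.
lpm = smf.ols("poor ~ south + educ_years + employed", data=df).fit(cov_type="HC1")
print(lpm.params)
print(decompose_gap(df, "south", "poor", ["educ_years", "employed"]))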
Chapter 2 results suggest that power resources, followed by economic structure, are most important in explaining the higher poverty in the South. This underscores the salience of political and economic contexts in shaping poverty across place. Chapter 3 results indicate that individual-level economic factors are the largest predictors of poverty within the South, and even more so in the Deep South. Moreover, divergent results between the South, Deep South, and Peripheral South illustrate how the impact of poverty predictors can vary across contexts. Chapter 4 results show significant bivariate associations between historical race regimes and poverty among Southern states, although regression models fail to yield significant effects. However, historical race regimes do have a small but significant effect in explaining the Black-White poverty gap. Results also suggest that employment and education are key to understanding poverty among Blacks and the Black-White poverty gap. Collectively, these chapters underscore why place is so important for understanding poverty and inequality. They also illustrate the salience of micro- and macro-level characteristics of place in helping to create, maintain, and reproduce systems of inequality across place.
Abstract:
Two-dimensional (2D) hopper flow of disks has been extensively studied. Here, we investigate hopper flow of ellipses with aspect ratio $\alpha = 2$, and we contrast that behavior with the flow of disks. We use a quasi-2D hopper containing photoelastic particles to obtain stress/force information, and we simultaneously measure the particle motion and stress. We determine several properties, including discharge rates, jamming probabilities, and the number of particles in clogging arches. For both particle types, the size of the opening, $D$, relative to the size of the particles, $\ell$, is an important dimensionless measure. The orientation of the ellipses plays an important role in flow rheology and clogging: the alignment of contacting ellipses enhances the probability of forming stable arches. This study offers insight for applications involving the flow of granular materials consisting of ellipsoidal, and possibly other non-spherical, shapes.
Abstract:
The study of a score by a serious performer is a fundamental step in the process of arriving at a knowledgeable and deeply informed approach to performing a piece of music. In order to obtain this knowledge, numerous aspects of the score must be taken into consideration. It is the intent of this dissertation to gather and analyze the information concerning Naturale, a work written by Luciano Berio in 1985 for viola, percussion and recorded voice, based on Sicilian folk songs. All the aspects surrounding Naturale’s existence are taken into consideration in this study. First, it is important to reflect on Berio’s compositional style and traits, the manner in which he relates his works to one another, what he sees in folk music, and his own personal desire to intertwine art music and folk music. For Berio, Naturale is not an isolated venture into the realm of mixing folk music and his own avant-garde style; it is instead one of many works resulting from his long-standing relationship with folk music. Another essential aspect in this case is the study of Sicilian folk music itself, and the sources used by Berio to find the songs by which he was inspired. The work is examined section by section, with figures showing both excerpts of Naturale and the original songs with their translations. An analysis covering harmonic, thematic and formal aspects of the score was developed in order to arrive at a better understanding of the structure and pacing of the piece. For this research the author went to Italy to conduct an interview with Maestro Aldo Bennici, the Sicilian violist for whom Naturale was composed. This interview helped in the discovery of two more songs used by Berio that have not, to this point, been identified in any other document. Bennici’s outstanding testimony portrayed the expressive character of this music and the evocative imagery behind this score. I hope to bring this knowledge to other performers, so that they may fully understand and appreciate the unique beauty and power of Berio’s Naturale.
Abstract:
An MHD flow is considered which is relevant to the horizontal Bridgman technique for crystal growth from a melt. In the unidirectional parallel flow approximation, an analytical solution is found that accounts for the finite rectangular cross-section of the channel in the case of a vertical magnetic field. Numerical pseudo-spectral solutions are used in the cases of arbitrary magnetic field and gravity vector orientations. The vertical magnetic field (parallel to gravity) is found to be the most effective at damping the flow; however, complicated flow profiles with "overvelocities" in the corners are typical in the case of a finite cross-section channel. The temperature distribution is shown to depend on the flow profile. The linear stability of the flow is investigated by use of the Chebyshev pseudospectral method. For the case of an infinite-width channel the transversal rolls instability is investigated, and for the finite cross-section channel the longitudinal rolls instability is considered. The critical Grashof (Gr) number values are computed as functions of the Hartmann (Ha) number and of the wave number, or of the aspect ratio in the finite cross-section case.
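The Chebyshev pseudospectral discretisation named above is a standard technique; the sketch below illustrates its core building block (a Chebyshev differentiation matrix applied to a one-dimensional eigenvalue problem). It is not the authors' stability code, and the resolution N = 32 and the model problem are assumptions made for the illustration.

# Sketch of the standard Chebyshev pseudospectral building block
# (cf. Trefethen, "Spectral Methods in MATLAB"); not the authors' stability code.
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Gauss-Lobatto nodes x (N+1 points on [-1, 1])."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # diagonal entries from the row-sum identity
    return D, x

# Example: eigenvalues of -u'' = lam * u on [-1, 1] with u(+-1) = 0.
# The exact eigenvalues are (k*pi/2)**2; the same generalized-eigenvalue machinery
# underlies the critical-Gr computations described in the abstract.
N = 32
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]                 # drop boundary rows/cols (Dirichlet BCs)
lam = np.sort(np.linalg.eigvals(-D2).real)
print(lam[:4])                           # ~ [2.47, 9.87, 22.21, 39.48]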
Abstract:
Multilevel algorithms are a successful class of optimisation techniques which address the mesh partitioning problem. They usually combine a graph contraction algorithm with a local optimisation method which refines the partition at each graph level. To date these algorithms have been used almost exclusively to minimise the cut-edge weight; however, it has been shown that, for certain classes of solution algorithm, the convergence of the solver is strongly influenced by the subdomain aspect ratio. In this paper, therefore, we modify the multilevel algorithms in order to optimise a cost function based on aspect ratio. Several variants of the algorithms are tested and shown to provide excellent results.
Abstract:
We present a dynamic distributed load balancing algorithm for parallel, adaptive Finite Element simulations in which we use preconditioned Conjugate Gradient solvers based on domain decomposition. The load balancing is designed to maintain good partition aspect ratio, and we show that cut size is not always the appropriate measure in load balancing. Furthermore, we attempt to answer the question of why the aspect ratio of partitions plays an important role for certain solvers. We define and rate different kinds of aspect ratio and present a new center-based partitioning method for calculating the initial distribution which implicitly optimizes this measure. During the adaptive simulation, the load balancer calculates a balancing flow using different versions of the diffusion algorithm and a variant of breadth-first search. Elements to be migrated are chosen according to a cost function aiming at the optimization of subdomain shapes. Experimental results for Bramble's preconditioner and comparisons to state-of-the-art load balancers show the benefits of the construction.
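The balancing flow mentioned above is computed with diffusion-type schemes; the following is a minimal sketch of the generic first-order diffusion iteration on a processor graph, not the authors' specific variants. The diffusion parameter, sweep count, and the toy path graph are assumptions made for the example.

# Illustrative sketch of first-order diffusive load balancing on a processor graph.
# Each sweep, every edge transfers alpha * (load_i - load_j); the accumulated edge
# transfers form the balancing flow.
import numpy as np

def diffusion_balancing_flow(adj, load, alpha=0.25, sweeps=200):
    """adj: symmetric 0/1 adjacency matrix of the processor graph; load: work per
    processor. Returns (balanced loads, antisymmetric flow matrix)."""
    load = load.astype(float).copy()
    flow = np.zeros_like(adj, dtype=float)      # flow[i, j] = work to send i -> j
    for _ in range(sweeps):
        diff = load[:, None] - load[None, :]    # pairwise load differences
        step = alpha * adj * diff               # diffuse only along graph edges
        flow += step
        load -= step.sum(axis=1)                # net outflow leaves each node
    return load, flow

# Example: a 4-processor path graph with an unbalanced load.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
load = np.array([10.0, 2.0, 2.0, 2.0])
balanced, flow = diffusion_balancing_flow(adj, load)
print(balanced)    # -> approximately [4, 4, 4, 4]
print(flow[0, 1])  # -> approximately 6: work processor 0 should migrate to processor 1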
Abstract:
The deployment of OECBs (opto-electrical circuit boards) is expected to make a significant impact in the telecom switch arena within the next five years, creating optical backplanes with high-speed point-to-point optical interconnects. The crucial aspect in the manufacturing process of the optical backplane is the successful coupling between the VCSEL (vertical cavity surface emitting laser) device and the embedded waveguide in the OECB. The results from a thermo-mechanical analysis are being used in a purely optical model, which solves for the optical energy and attenuation from the VCSEL aperture into, and then through, the waveguide. Results from the modelling are being investigated using design of experiments (DOE) analysis to identify packaging parameters that minimise misalignment; this is achieved via a specialist optimisation software package. Results from the thermo-mechanical and optical models are discussed, as are experimental results from the DOE.
Abstract:
Today, most IC and board designs are undertaken using two-dimensional graphics tools and rule checks. System-in-package is driving three-dimensional design concepts, and this is posing a number of challenges for electronic design automation (EDA) software vendors. System-in-package requires three-dimensional EDA tools and design collaboration systems with appropriate manufacturing and assembly rules for these expanding technologies. Simulation and analysis tools today focus on one aspect of the design requirement, for example thermal, electrical or mechanical. System-in-package requires analysis and simulation tools that can easily capture complex three-dimensional structures and provide fast, integrated solutions to issues such as thermal management, reliability, electromagnetic interference, etc. This paper discusses some of the challenges faced by the design and analysis community in providing appropriate tools to engineers for system-in-package design.
Abstract:
Accurate representation of the coupled effects between turbulent fluid flow with a free surface, heat transfer, solidification, and mold deformation has been shown to be necessary for the realistic prediction of several defects in castings and also for determining the final crystalline structure. A core component of the computational modeling of casting processes is mold filling, which is the most computationally intensive aspect of casting simulation at the continuum level. Given the complex geometries involved in shape casting, the evolution of the free surface, gas entrapment, and the entrainment of oxide layers into the casting make this a very challenging task in every respect. Despite well over 30 years of effort in developing algorithms, this is by no means a closed subject. In this article, we review the full range of computational methods used, from unstructured finite-element (FE) and finite-volume (FV) methods through fully structured and block-structured approaches utilizing the cut-cell family of techniques to capture the geometric complexity inherent in shape casting. This discussion includes the challenges of generating rapid solutions on high-performance parallel cluster technology and how mold filling links in with the full spectrum of physics involved in shape casting. Finally, some novel techniques now emerging that can address genuinely arbitrarily complex geometries are briefly outlined, and their advantages and disadvantages are discussed.
Abstract:
This paper presents a numerical study of Reynolds number and scaling effects in microchannel flows. The configuration includes a rectangular, high-aspect-ratio microchannel with heat sinks, similar to an experimental setup. Water at ambient temperature is used as the coolant, and the heating is introduced via electronic cartridges in the solids. Two channel heights, measuring 0.3 mm and 1 mm, are considered first. The Reynolds number, based on the hydraulic diameter, varies in the range 500-2200. Simulations focus on the effects of the Reynolds number and channel height on the Nusselt number. It is found that the Reynolds number has a noticeable influence on the local Nusselt number distributions, in agreement with other studies. The numerical predictions of the dimensionless fluid temperature agree fairly well with experimental measurements; however, the dimensionless solid temperature exhibits a significant discrepancy near the channel exit, similar to that reported by other researchers. The present study demonstrates that there is a significant scaling effect at small channel heights, typically 0.3 mm, in agreement with experimental observations. This scaling effect is confirmed by three additional simulations carried out at channel heights of 0.24 mm, 0.14 mm, and 0.1 mm. A correlation between the channel height and the normalized Nusselt number is thus proposed, which agrees well with the results presented.
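For reference, the Reynolds and Nusselt numbers quoted above are presumably built on the standard hydraulic-diameter definitions for a rectangular channel of width $W$ and height $H$: $D_h = 4A/P = 2WH/(W + H)$, $Re = \rho\, u_m D_h / \mu$, and $Nu = h\, D_h / k_f$, where $u_m$ is the mean velocity, $h$ the heat transfer coefficient, and $k_f$ the fluid thermal conductivity. For a high-aspect-ratio channel with $W \gg H$, $D_h \approx 2H$, so the $H = 0.3$ mm case corresponds to $D_h \approx 0.6$ mm.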
Abstract:
There are two main approaches to the representation of temporal information in computer science: modal logic approaches (including tense logics and hybrid temporal logics) and predicate logic approaches (including temporal argument methods and reified temporal logics). On one hand, while tense logics, hybrid temporal logics and temporal argument methods enjoy formal theoretical foundations, their expressiveness has been criticised as not powerful enough for representing general temporal knowledge; on the other hand, although current reified temporal logics provide greater expressive power, most of them lack complete and sound axiomatic theories. In this paper, we propose a new reified temporal logic with a clear syntax and semantics in terms of a sound and complete axiomatic formalism which retains all the expressive power of the temporal reification approach.
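To illustrate the distinction drawn above (the notation here is generic, not necessarily the paper's): the temporal argument method adds a time parameter directly to each predicate, e.g. $Love(john, mary, t)$, whereas a reified logic turns the proposition into a term and asserts a meta-predicate over it and a temporal object, e.g. $HOLDS(love(john, mary), i)$ for an interval $i$. Reification thereby allows quantification over propositions as well as times, e.g. a homogeneity axiom of the form $\forall p, i, i'\,(HOLDS(p, i) \wedge In(i', i) \rightarrow HOLDS(p, i'))$, which is the kind of general temporal knowledge the argument method cannot state directly.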
Abstract:
This paper presents preliminary studies of electroplating using megasonic agitation to avoid the formation of voids within high-aspect-ratio microvias that are used for the redistribution of interconnects in high-density interconnection technology in printed circuit boards. Through this technique, uniform deposition of metal on the side walls of the vias is possible. High-frequency acoustic streaming at megasonic frequencies reduces the Nernst diffusion layer down to the sub-micron range, thereby allowing conformal electrodeposition in deep grooves. This effect enables the normally convection-free liquid near the surface to be agitated. Higher throughput and better control of the material properties of the deposits can be achieved for the manufacturing of embedded interconnections and metal-based MEMS. For optimal filling performance of the microvias, a full design of experiments (DOE) and a multi-physics numerical simulation have been conducted to analyse the influence of megasonic agitation on the plating quality of the microvias. Megasonic-based deposition has been found to increase the deposition rate as well as to improve the quality of the metal deposits.
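The link between a thinner Nernst diffusion layer and faster, more conformal plating can be made explicit with the standard mass-transport-limited current density (a textbook relation, not a result of this paper): $i_{\lim} = n F D c_b / \delta_N$, where $n$ is the number of electrons transferred, $F$ the Faraday constant, $D$ the diffusion coefficient, $c_b$ the bulk ion concentration, and $\delta_N$ the diffusion-layer thickness. Reducing $\delta_N$ from tens of microns to the sub-micron range therefore raises the achievable deposition rate roughly in proportion, including deep inside the vias.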
Abstract:
The creation of my hypermedia work Index of Love, which narrates a love story as an archive of moments, images and objects recollected, also articulated for me the potential of the book as electronic text. The book has always existed as both narrative and archive. Tables of contents and indexes allow the book to function simultaneously as linear narrative and as a non-linear, searchable database. The book therefore has more in common with the so-called 'new media' of the 21st century than it does with the dominant 20th-century media of film, video and audiotape, whose logic and mode of distribution are resolutely linear. My thesis is that the non-linear logic of new media brings to the fore an aspect of the book - the index - whose potential for the production of narrative is only just beginning to be explored. When a reader/user accesses an electronic work, such as a website, via its menu, they simultaneously experience it as narrative and archive. The narrative journey taken is created through the menu choices made. Within the electronic book, therefore, the index (or menu) has the potential to function as more than just an analytical or navigational tool: it has the potential to become a creative, structuring device. This opens up new possibilities for the book, particularly as, in its paper-based form, the book indexes factual work but not fiction. In the electronic book, however, the index offers as rich a potential for fictional narratives as it does for factual volumes.
Abstract:
Generally speaking, the term temporal logic refers to any system of rules and symbolism for representing and reasoning about propositions qualified in terms of time. In computer science, particularly in the domain of Artificial Intelligence, there are two main approaches to the representation of temporal information: modal logic approaches, including tense logic and hybrid temporal logic, and predicate logic approaches, including the temporal argument method and reified temporal logic. On one hand, while tense logic, hybrid temporal logic and the temporal argument method enjoy formal theoretical foundations, their expressiveness has been criticised as not powerful enough for representing general temporal knowledge; on the other hand, although reified temporal logic provides greater expressive power, most of the current systems following temporal reification lack complete and sound axiomatic theories. With these observations in mind, a new reified temporal logic with a clear syntax and semantics in terms of a sound and complete axiomatic formalism is introduced in this paper, which retains all the expressive power of temporal reification.