924 results for Space-time block coding (STBC)
Abstract:
This thesis presents an approach for formulating and validating a space-averaged drag model for coarse mesh simulations of gas-solid flows in fluidized beds using the two-fluid model. Proper modeling of the fluid dynamics is central to understanding any industrial multiphase flow. The gas-solid flows in fluidized beds are heterogeneous and usually simulated with an Eulerian description of the phases. Such a description requires fine meshes and small time steps for proper prediction of the hydrodynamics. This constraint on mesh and time step size results in a large number of control volumes and long computational times, which are unaffordable for simulations of large-scale fluidized beds. If proper closure models are not included, coarse mesh simulations of fluidized beds do not give reasonable results: the coarse mesh fails to resolve the mesoscale structures and yields uniform solids concentration profiles. For a circulating fluidized bed riser, such predicted profiles result in a higher drag force between the gas and solid phases and an overestimated solids mass flux at the outlet. Thus, there is a need to formulate closure correlations that can accurately predict the hydrodynamics on coarse meshes. This thesis uses the space-averaging modeling approach to formulate closure models for coarse mesh simulations of gas-solid flow in fluidized beds with Geldart group B particles. In formulating the closure correlation for the space-averaged drag model, the main modeling parameters were found to be the averaging size, the solid volume fraction, and the distance from the wall. The closure model for the gas-solid drag force was formulated and validated for coarse mesh simulations of the riser, which verified this modeling approach. Coarse mesh simulations using the corrected drag model resulted in lower values of solids mass flux. 
Such an approach is a promising tool in the formulation of appropriate closure models which can be used in coarse mesh simulations of large scale fluidized beds.
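The core filtering idea, averaging a fine-grid field onto a coarse mesh and correcting the microscopic drag accordingly, can be sketched in a few lines. This is a minimal toy illustration under assumed grid sizes and an assumed scalar correction factor, not the thesis's actual closure correlation:

```python
# Hypothetical sketch: space-averaging a fine-grid solids volume fraction
# onto a coarse mesh, plus a toy heterogeneity correction to the drag.
import numpy as np

def space_average(fine_field: np.ndarray, factor: int) -> np.ndarray:
    """Block-average a 2-D fine-grid field over factor x factor cells."""
    ny, nx = fine_field.shape
    assert ny % factor == 0 and nx % factor == 0
    return fine_field.reshape(ny // factor, factor,
                              nx // factor, factor).mean(axis=(1, 3))

def corrected_drag(beta_micro: float, h: float) -> float:
    """Toy correction: scale the microscopic drag coefficient by a
    heterogeneity factor h in [0, 1] (h = 1 recovers homogeneous drag)."""
    return h * beta_micro

# Fine-grid solids fraction with a dense cluster (a mesoscale structure)
# that a coarse cell would smear out into a uniform average.
fine = np.full((8, 8), 0.05)
fine[2:4, 2:4] = 0.50
coarse = space_average(fine, 4)
print(coarse.shape)          # (2, 2)
print(float(coarse[0, 0]))   # average over the quadrant with the cluster
```

In an actual model, the correction factor h would itself be a correlation in the averaging size, solid volume fraction, and distance from the wall, as the abstract describes.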
Abstract:
Chaotic dynamical systems exhibit trajectories in their phase space that converge to a strange attractor. The strangeness of the chaotic attractor is associated with its dimension, which in this case is noninteger. This contribution presents an overview of the main definitions of dimension, discussing their evaluation from time series by means of the correlation dimension and the generalized dimension. The investigation is applied to the nonlinear pendulum, where signals are generated by numerical integration of the mathematical model, selecting a single variable of the system as a time series. In order to simulate experimental data sets, random noise is introduced into the time series. State space reconstruction and the determination of attractor dimensions are carried out for periodic and chaotic signals. Results obtained from the time series analyses are compared with a reference value obtained from the analysis of the mathematical model, estimating noise sensitivity. This procedure allows one to identify the best techniques to apply in the analysis of experimental data.
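The correlation-dimension estimate described above can be sketched with a delay embedding and the Grassberger-Procaccia correlation sum. The logistic-map series, embedding parameters, and radius range below are illustrative assumptions standing in for the pendulum signal:

```python
# Sketch: state space reconstruction by delay embedding, then a
# correlation-sum estimate of the attractor dimension (C(r) ~ r^D).
import numpy as np

def delay_embed(x: np.ndarray, dim: int, tau: int) -> np.ndarray:
    """Reconstruct state space from a scalar series by delay embedding."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def correlation_sum(pts: np.ndarray, r: float) -> float:
    """Fraction of distinct point pairs closer than r."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    n = len(pts)
    return (np.sum(d < r) - n) / (n * (n - 1))  # exclude self-pairs

# Chaotic logistic-map series as a stand-in for the pendulum signal.
x = np.empty(600)
x[0] = 0.4
for i in range(1, 600):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])

pts = delay_embed(x, dim=2, tau=1)
# Slope of log C(r) versus log r over a small range estimates the dimension.
r1, r2 = 0.01, 0.05
D = (np.log(correlation_sum(pts, r2)) -
     np.log(correlation_sum(pts, r1))) / np.log(r2 / r1)
print(D)
```

In practice the slope is fitted over a whole scaling region rather than two radii, and the estimate is repeated on noise-contaminated series to gauge noise sensitivity, as the abstract describes.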
Abstract:
Multiprocessor system-on-chip (MPSoC) designs utilize the available technology and communication architectures to meet the requirements of upcoming applications. In an MPSoC, the communication platform is both the key enabler and the key differentiator for realizing efficient MPSoCs. It provides product differentiation to meet a diverse, multi-dimensional set of design constraints, including performance, power, energy, reconfigurability, scalability, cost, reliability and time-to-market. The communication resources of a single interconnection platform cannot be fully utilized by all kinds of applications; for example, providing higher communication bandwidth to computation-intensive but not data-intensive applications is often unfeasible in practical implementations. This thesis performs architecture-level design space exploration towards efficient and scalable resource utilization for MPSoC communication architectures. In order to meet the performance requirements within the design constraints, careful selection of the MPSoC communication platform and resource-aware partitioning and mapping of the application play an important role. To enhance the utilization of communication resources, a variety of techniques can be used, such as resource sharing, multicast to avoid re-transmission of identical data, and adaptive routing. For implementation, these techniques should be customized to the platform architecture. To address the resource utilization of MPSoC communication platforms, a variety of architectures with different design parameters and performance levels, namely the Segmented bus (SegBus), Network-on-Chip (NoC) and Three-Dimensional NoC (3D-NoC), are selected. Average packet latency and power consumption are the evaluation parameters for the proposed techniques. In conventional computing architectures, a fault on a component makes the connected fault-free components inoperative. 
A resource-sharing approach can utilize the fault-free components to retain system performance by reducing the impact of faults. Design space exploration also helps narrow down the selection of an MPSoC architecture that can meet the performance requirements within the design constraints.
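One of the evaluation parameters named above, average packet latency, is driven under uniform random traffic by the average hop count of the routing scheme. The sketch below (an illustrative assumption, not the thesis's evaluation setup) computes the average hop count of dimension-order (XY) routing on an n x n mesh NoC:

```python
# Average hop count of XY (dimension-order) routing on an n x n mesh NoC
# under uniform random traffic: packets travel the Manhattan distance.
from itertools import product

def avg_hops(n: int) -> float:
    """Average Manhattan distance between distinct node pairs of an
    n x n mesh, i.e. the mean XY-routing hop count."""
    nodes = list(product(range(n), repeat=2))
    pairs = [(s, d) for s in nodes for d in nodes if s != d]
    total = sum(abs(sx - dx) + abs(sy - dy)
                for (sx, sy), (dx, dy) in pairs)
    return total / len(pairs)

for n in (2, 4, 8):
    print(n, avg_hops(n))  # e.g. avg_hops(4) == 8/3
```

Comparing such a metric across mesh sizes, or against a segmented bus model, is the kind of first-order comparison that architecture-level design space exploration starts from.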
Abstract:
Irrigated rice sowing season and red rice competition are among the main factors affecting grain yield. The objective of this work was to evaluate the effect of the sowing date of irrigated rice and the timing of application of the herbicide imazapyr + imazapic on red rice control and irrigated rice grain yield. Eight experiments were performed at the following dates: 09/30, 10/19, 11/08 and 12/01 for the 2010/2011 harvest season, and 09/27, 10/17, 11/08 and 12/05 for the 2011/2012 harvest season. The treatments were: application of the herbicide imazapyr + imazapic at doses of 105+35 g ha-1 in pre-emergence (PRE); 52.5+17.5 g ha-1 in pre-emergence and 52.5+17.5 g ha-1 in post-emergence (PRE + POST); 105+35 g ha-1 in post-emergence (POST); and a control without application and no weeding. The cultivar Puitá Inta CL was used in a randomized block design with four replicates. A joint analysis of the experiments was carried out. There was less emergence of red rice and higher grain yield of the irrigated rice at the early sowing dates (09/30/10 and 09/27/11), with 10,578 and 8,653 kg ha-1, respectively. At the end of the season (12/01/10 and 12/05/11), there was a greater reduction of the red rice seed bank. Sowing at the beginning of the recommended period provided higher irrigated rice grain yield. The application of imazapyr + imazapic at a dose of 52.5+17.5 g ha-1 in PRE + 52.5+17.5 g ha-1 in POST, and at 105+35 g ha-1 in PRE or POST alone, was effective in the control of red rice.
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, the node can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field: digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. 
The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect the scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. 
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
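The dataflow model described above, nodes that communicate only through FIFO queues and fire once their firing rule is satisfied, can be sketched minimally as follows. The actor definitions and the naive round-robin scheduler are illustrative assumptions, not RVC-CAL semantics:

```python
# Minimal dataflow sketch: actors fire independently whenever enough
# tokens are available on their input queues (their firing rule).
from collections import deque

class Actor:
    def __init__(self, fn, inputs, output, needed=1):
        self.fn, self.inputs, self.output = fn, inputs, output
        self.needed = needed  # tokens required per input queue to fire

    def can_fire(self) -> bool:
        return all(len(q) >= self.needed for q in self.inputs)

    def fire(self) -> None:
        args = [q.popleft() for q in self.inputs for _ in range(self.needed)]
        self.output.append(self.fn(*args))

# Graph: source queue -> double -> pair_sum (consumes two tokens per firing).
q1, q2, out = deque(), deque(), deque()
double = Actor(lambda x: 2 * x, [q1], q2)
pair_sum = Actor(lambda a, b: a + b, [q2], out, needed=2)

q1.extend([1, 2, 3, 4])
# Naive dynamic scheduling: round-robin over any actor whose rule holds.
actors = [double, pair_sum]
while any(a.can_fire() for a in actors):
    for a in actors:
        if a.can_fire():
            a.fire()
print(list(out))  # [6, 14]
```

A quasi-static scheduler would replace most of the run-time can_fire checks with pre-computed static firing sequences, leaving only the few genuinely dynamic decisions to be taken at run time, which is the problem the thesis casts as model checking.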
Abstract:
Time series analysis can be categorized into three different approaches: classical, Box-Jenkins, and state space. The classical approach lays the foundation for the analysis, and the Box-Jenkins approach is an improvement on the classical approach that deals with stationary time series. The state space approach allows time-varying factors and covers a broader area of time series analysis. This thesis focuses on the parameter identifiability of different parameter estimation methods, such as LSQ, Yule-Walker, and MLE, which are used in the above time series analysis approaches. The Kalman filter method and smoothing techniques are also integrated with the state space approach and the MLE method to estimate parameters, allowing them to change over time. Parameter estimation is carried out by repeated estimation integrated with MCMC, inspecting how well the different estimation methods can identify the optimal model parameters. Identification is performed in both a probabilistic and a general sense, and the results are compared in order to study and represent identifiability in a more informative way.
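The state space machinery mentioned above can be illustrated with a Kalman filter for a local level model, y_t = mu_t + e_t with mu_t = mu_{t-1} + w_t, returning the Gaussian log-likelihood that an MLE routine would maximize. The model choice, parameter values, and simulated data are illustrative assumptions:

```python
# Kalman filter for a local level model, accumulating the prediction-error
# (Gaussian) log-likelihood used for maximum likelihood estimation.
import math
import random

def kalman_loglik(y, sigma_e2, sigma_w2, mu0=0.0, p0=1e6):
    mu, p, ll = mu0, p0, 0.0
    for obs in y:
        p = p + sigma_w2                 # predict state variance
        f = p + sigma_e2                 # innovation variance
        v = obs - mu                     # innovation (prediction error)
        ll += -0.5 * (math.log(2 * math.pi * f) + v * v / f)
        k = p / f                        # Kalman gain
        mu = mu + k * v                  # update state estimate
        p = (1 - k) * p                  # update state variance
    return ll

# Simulate data with true sigma_e = 1.0 and sigma_w = 0.1.
random.seed(0)
y, mu = [], 0.0
for _ in range(200):
    mu += random.gauss(0, 0.1)
    y.append(mu + random.gauss(0, 1.0))

# Near-true parameters should beat a badly misspecified guess.
print(kalman_loglik(y, 1.0, 0.01) > kalman_loglik(y, 0.1, 1.0))
```

An optimization or MCMC loop would evaluate kalman_loglik over candidate (sigma_e2, sigma_w2) pairs; how sharply the likelihood surface peaks around the optimum is one concrete view of the parameter identifiability the thesis studies.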
Abstract:
The aim of this study was to explore the clinical efficacy of a novel retrograde puncture approach to establish a preperitoneal space for laparoscopic direct inguinal hernia repair with inguinal ring suturing. Forty-two patients who underwent laparoscopic inguinal hernia repair with retrograde puncture for preperitoneal space establishment as well as inguinal ring suturing between August 2013 and March 2014 at our hospital were enrolled. Preperitoneal space was successfully established in all patients, with a mean establishment time of 6 min. Laparoscopic repairs were successful in all patients, with a mean surgical time of 26±15.1 min. Mean postoperative hospitalization duration was 3.0±0.7 days. Two patients suffered from postoperative local hematomas, which were relieved after puncturing and drainage. Four patients had short-term local pain. There were no cases of chronic pain. Patients were followed up for 6 months to 1 year, and no recurrence was observed. Our results demonstrate that preperitoneal space established by the retrograde puncture technique can be successfully used in adult laparoscopic hernioplasty to avoid intraoperative mesh fixation, and thus reduce medical costs.
Abstract:
The aim of the present study was the assessment of volatile organic compounds produced by Sporidiobolus salmonicolor (CBS 2636) using methyl and ethyl ricinoleate, ricinoleic acid, and castor oil as precursors. The analysis of the volatile organic compounds was carried out using headspace solid-phase micro-extraction (HS-SPME). A factorial experimental design was used to investigate the extraction conditions, examining stirring rate (0-400 rpm), temperature (25-60 ºC), extraction time (10-30 minutes), and sample volume (2-3 mL). The identification of the volatile organic compounds was carried out by gas chromatography with a mass spectrometry detector (GC/MSD). The conditions that resulted in maximum extraction were: 60 ºC, 10 minutes of extraction, no stirring, a sample volume of 2.0 mL, and the addition of saturated KCl (1:10 v/v). In the bio-production of volatile organic compounds, the effects of stirring rate (120-200 rpm), temperature (23-33 ºC), pH (4.0-8.0), precursor concentration (0.02-0.1%), mannitol (0-6%), and asparagine concentration (0-0.2%) were investigated. Bio-production at 28 ºC, 160 rpm, pH 6.0, and with the addition of 0.02% ricinoleic acid to the medium yielded the highest production of VOCs, identified as 1,4-butanediol; 1,2,2-trimethylcyclopropylamine; beta-ionone; 2,3-butanedione; pentanal; tetradecane; 2-isononenal; 4-octen-3-one; propanoic acid; and octadecane.
Abstract:
Research in the field of econophysics has shown that fluid flows have analogous phenomena in financial market behavior, the typical parallel being drawn between energy in fluids and information in markets. However, the geometry of the manifold on which markets act out their dynamics (the corporate space) is not yet known. In this thesis, using a seven-year time series of the prices of the stocks used to compute the S&P500 index on the New York Stock Exchange, we have created local charts of the corporate space with the goal of finding standing waves and other soliton-like patterns in the behavior of stock price deviations from the S&P500 index. By first calculating the correlation matrix of normalized stock price deviations from the S&P500 index, we have performed a local singular value decomposition over a set of four different time windows as guides to the nature of the patterns that may emerge. It turns out that in almost all cases each singular vector is essentially determined by a relatively small set of companies with large positive or negative weights on that singular vector. Over particular time windows, these weights are sometimes strongly correlated with at least one industrial sector, and certain sectors are more prone to fast dynamics whereas others sustain longer standing waves.
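The pipeline described above, normalized deviations from an index, their correlation matrix over a local time window, and the dominant singular vectors whose largest weights pick out small groups of companies, can be sketched on synthetic data. The two-sector return model, window size, and noise level are illustrative assumptions, not the thesis's data:

```python
# Sketch: correlation matrix of normalized price deviations from an index,
# then a local SVD whose leading vector separates two synthetic "sectors".
import numpy as np

rng = np.random.default_rng(1)
n_days, n_stocks = 250, 20
# Two sectors whose stocks share a common factor, plus idiosyncratic noise.
factor = rng.normal(size=(n_days, 2))
loadings = np.zeros((2, n_stocks))
loadings[0, :10] = 1.0   # sector A: stocks 0-9
loadings[1, 10:] = 1.0   # sector B: stocks 10-19
returns = factor @ loadings + 0.3 * rng.normal(size=(n_days, n_stocks))
index = returns.mean(axis=1)                  # equal-weight "index"
deviations = returns - index[:, None]

window = deviations[:125]                     # one local time window
z = (window - window.mean(0)) / window.std(0) # normalize each stock
corr = (z.T @ z) / len(z)                     # correlation matrix
u, s, vt = np.linalg.svd(corr)

# The leading singular vector is dominated by the sector split: sector-A
# weights share one sign, sector-B weights the other.
v = u[:, 0]
print(np.all(np.sign(v[:10]) == np.sign(v[0])),
      np.all(np.sign(v[10:]) == -np.sign(v[0])))
```

Repeating the decomposition over several windows, as the thesis does with four window lengths, shows which of these dominant company groupings persist (long standing waves) and which decay quickly.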
Abstract:
Health Innovation Village at GE is one of the new communities targeted at startup and growth-oriented companies. It has been established at the premises of a multinational conglomerate that promotes the networking and growth of startup companies. The concept combines features from traditional business incubators, accelerators, and coworking spaces. This research compares Health Innovation Village to these concepts regarding its goals, target clients, source of income, organization, facilities, management, and success factors. In addition, a new incubator classification model is introduced. Health Innovation Village is also examined from its tenants' perspective, and improvements are suggested. The work was implemented as a qualitative case study by interviewing GE staff with connections to Health Innovation Village as well as startup entrepreneurs and employees working there. The most evident features of Health Innovation Village correspond to those of business incubators, although it is atypical as a non-profit corporate business incubator. Strong network orientation and connections to venture capitalists are common characteristics of these new types of accelerators. The design of the premises conforms to the principles of coworking spaces, but the services provided to the startup companies are considerably more versatile than those offered by coworking spaces. The advantages of Health Innovation Village are its first-class premises and exceptionally good networking possibilities that other types of incubators or accelerators are not able to offer. A conglomerate can also provide multifaceted special knowledge to young firms. In addition, both GE and the startups gained considerable publicity through their cooperation, a characteristic that benefits both parties. Most of the expectations of the entrepreneurs were exceeded. However, communication and the scope of cooperation remain challenges. 
Micro companies spend their time developing and marketing their products and acquiring financing. Therefore, communication should be as clear as possible and accessible everywhere. The startups would prefer to cooperate significantly more, but few have the time available to assume the responsibility of leadership. The entrepreneurs also expected to have more possibilities for cooperation with GE. Wider collaboration might be accomplished by curation in the same way as it is used in the well-functioning coworking spaces where curators take care of practicalities and promote cooperation. Communication issues could be alleviated if the community had its own Intranet pages where all information could be concentrated. In particular, a common calendar and a room reservation system could be useful. In addition, it could be beneficial to have a section of the Intranet open for both the GE staff and the startups so that those willing to share their knowledge and those having project offers could use it for advertising.
Abstract:
Graffiti, Memory and Contested Space: Mnemonic Initiatives Following Periods of Trauma and/or Repression in Buenos Aires, Argentina. This thesis concerns the popular articulation of memory following periods or incidents of trauma in Argentina. I am interested in how groups lay claim to various public spaces in the city and how they convert these spaces into mnemonic battlegrounds. In considering these spaces of trauma and places of memory, I am primarily interested in how graffiti writing (stencils, spray-paint, signatures, etchings, wall-paintings, murals and installations) is used to make these spaces transmit particular memories that impugn official versions of the past. This thesis draws on literatures focused on popular/public memory. Scholars argue that memory is socially constructed and thus actively contested. Marginal initiatives such as graffiti writing challenge the memory projects of the state as well as state projects that are perceived by citizens to be 'inadequate,' 'inappropriate,' and/or as promoting the erasure of memory. Many of these initiatives are a reaction to the pro-reconciliation and pro-oblivion strategies of previous governments. I outline that the history of silences and impunity, and a longstanding emphasis on reconciliation at the expense of truth and justice, has created an environment of vulnerable memory in Argentina. Popular memory entrepreneurs react by aggressively articulating their memories in time and in space. As a result of this intense memory work, the built landscape in Buenos Aires is dotted with mnemonic initiatives that aim to contradict or subvert officially sanctioned memories. I also suggest that memory workers in Argentina persistently and carefully use the sites of trauma as well as key public spaces to reach official as well as popular audiences. 
The data for this project was collected in five spaces in Buenos Aires, the Plaza de Mayo, Plaza Congreso, La Republica Cromanon nightclub, Avellaneda Train Station and El Olimpo, a former detention centre from the military dictatorship.
Abstract:
The purpose of this qualitative study was to understand the relationships between creativity and the working artist/teacher employed by an art college. The topic emerged from my job as an instructor at The Ontario College of Art, which was used as the primary data resource and provided the highest caliber of professionals to choose from. Existing data generated by the research of Cawelti, Rappaport, and Wood (1992) were used to facilitate the study. Those data were generated by a group of 5 faculty members from The University of Northern Iowa, recognized for their expertise in the arts (a painter, a poet, a sculptor, a novelist, and a photographer). They were asked to respond to the following statement: "In as much detail as you like, list the things that you did, thought, or felt the last time you created an artistic product." Cawelti, Rappaport, and Wood (1992) produced three models of the creative process, each building on the previous, with the resultant third being, in my opinion, an excellent illustration (text/visual) of the creative process. Model three (Appendix D) presented a "multi-dimensional view of the creative process: time, space, observability, and consciousness" (p. 90). Model three utilized a visual mapping device along the bottom of the page linked to text segments above. Both the visual and the text were interrelated so that they harmonized into a comprehensive "picture." The participants of this qualitative study were asked to consider model three from their professional perspective as artist/teachers. The interpretive sciences directed the methodology. The hermeneutic circle of continuous reflection from the whole to the part and back to the whole was an important aspect of the data analyses. Four members of the Foundation Department at The Ontario College of Art were the key participants. 
A series of conversational interviews was the primary source of data collection; this was augmented by observation, fieldnotes, and follow-up telephone interviews. Transcripts of interviews were returned to participants for reflection, and the telephone was used to discuss any additional points raised. Analysis consisted of coding and organizing data according to emerging themes. These themes formed the basis for the narrative stories. The texts of the narrative stories were given back to each participant for further comment. Revisions were made until both the researcher and the participants felt that the stories reflected reality. The resultant whole was critiqued from the researcher's perspective. The significance of this study is discussed as it pertains to the working artist/teacher, and areas in need of further study are pointed out.
Abstract:
Geochemical examination of the rock matrix and cements from core material extracted from four oil wells within southwestern Ontario suggests various stages of diagenetic alteration and preservation of the Trenton Group carbonates. The geochemical compositions of Middle Ordovician (LMC) brachiopods reflect the physicochemical water conditions of the ambient depositional environment. The sediments appear to have been altered in the presence of mixed waters during burial in a relatively open diagenetic microenvironment. Conodont CAI determination suggests that the maturation levels of the Trenton Group carbonates are low and proceeded at temperatures of about 30-50°C within the shallow burial environment. The Trenton Group carbonates are characterized by two distinct stages of dolomitization which proceeded at elevated temperatures. Preexisting fracture patterns and block faulting controlled the initial dolomitization of the precursor carbonate matrix. Dolomitization progressed in the presence of warm fluids (60-75°C) with physicochemical conditions characteristic of progressively depleted basinal water. The matrix is mostly Idiotopic-S and Idiotopic-E dolomite, with Xenotopic-A dolomite dominating the matrix where fractures occur. The second stage of dolomitization involved hydrothermal basinal fluid(s) with temperatures of about 60-70°C. These are the postulated source for the saddle dolomite and blocky calcite cements occurring in pore spaces and fractures. Rock porosity was partly occluded by Idiotopic-E type dolomite. Late-stage saddle dolomite, calcite, anhydrite, pyrite, marcasite and minor sphalerite and celestite cements effectively fill any remaining porosity within specific horizons. Based on cathodoluminescence, precipitation of the different diagenetic phases probably proceeded in open diagenetic systems from chemically homogeneous fluids. 
Ultraviolet fluorescence of the matrix and cements demonstrated that hydrocarbons were present during the earliest formation of saddle dolomite. Oxygen isotope values of -7.6 to -8.5 ‰ (PDB) and carbon isotope values of -0.5 to -3.0 ‰ (PDB) from the latest-stage dog-tooth calcite cement suggest that meteoric water was introduced into the system during its formation. This is estimated to have occurred at temperatures of about 25-40°C. Specific facies associations within the Trenton Group carbonates exhibit good hydrocarbon-generating potential based on organic carbon preservation (1-3.5%). Thermal maturation and Lopatin burial-history evaluations suggest that hydrocarbons were generated within the Trenton Group carbonates some time after 300 Ma. Progressively depleted vanadium trends measured from hydrocarbon samples within southwestern Ontario suggest its potential use as a hydrocarbon migration indicator on local (within an oilfield) and regional scales.
Abstract:
Recent research on the sources of cognitive competence in infancy and early childhood has highlighted the role of social and emotional factors (for example, Lewis, 1993b). Exploring the roots of competence requires a longitudinal and multivariate approach. To deal with the resulting complexity, potentially integrative theoretical constructs are required. One logical candidate is self-regulation. Three key developmental questions were the focus of this investigation: 1) Does infant self-regulation (attentional, emotional, and social) predict preschool cognitive competence? 2) Does infant self-regulation predict preschool self-regulation? 3) Does preschool self-regulation predict concurrent preschool cognitive competence? One hundred preschoolers (46 females, 54 males; mean age = 5 years, 11 months) who had participated at 9 and/or 12 months of age in an object permanence task were recruited to participate in this longitudinal investigation. Each subject completed four scales of the WPPSI-R and two social cognitive tasks. Parents completed questionnaires about their preschoolers' regulatory behaviours (Achenbach's Child Behavior Checklist [1991] and selected items from Eisenberg et al. [1993] and Derryberry & Rothbart [1988]). Separate behavioural coding systems were developed to capture regulatory capabilities in infancy (from the object permanence task) and preschool (from the WPPSI-R Block Design). Overall, correlational and multiple regression results offered strong affirmative answers to the three key questions (R's = .30 to .38), using the behavioural observations of self-regulation. Behavioural regulation at preschool substantially predicted parental reports of regulation, but the latter variables did not predict preschool competence. Infant self-regulation and preschool regulation made statistically independent contributions to competence, even though regulation at Time 1 and Time 2 were substantially related. 
The results are interpreted as supporting a developmental pathway in which well-regulated infants more readily acquire both expertise and more sophisticated regulatory skills. Future research should address the origins of these skills earlier in infancy, and the social contexts that generate them and support them during the intervening years.
Abstract:
There is an increase in the number of older adults 85 and over who are choosing to live alone within the community. Moreover, older adults who live alone reportedly spend an extensive amount of time alone within the home environment. In an effort to provide additional support and resources to older adults living in the community, a complement of services is being offered through public and private organizations. These in-home supports focus on the instrumental or functional tasks of daily living, such as personal and rehabilitative care, nourishment, maintenance and upkeep of the home, as well as volunteer social visitation. However, leisure resources and programs are not included among these services. Consequently, this creates a gap in leisure provision for this segment of the population. Throughout the life course, an individual's identity, role and purpose are developed and sustained through instrumental work roles in the formal and informal sector, as well as through personally meaningful leisure pastimes and experiences. Although roles shift post-retirement, participation in instrumental and expressive activities can provide opportunities through which older adults are able to fulfill their need for agency (individuality and autonomy) and affiliation (social relatedness). Therefore, barriers that inhibit instrumental or leisure experiences can negatively impact older adults' quality of life. This study explored the leisure lifestyles of four older adults, all of whom were over 85, lived alone within the community and were oriented to person, time and place. It became apparent that participants ordered their lives around a routine that consisted of instrumental, expressive and socially integrated tasks and activities. Moreover, participants purposely chose to remain at home because their home environment facilitated freedom, choice and independence. 
As a result, all four participants viewed their independence within the home as a critical determinant of their overall quality of life. Challenges associated with the home environment, participants' personal capacities and relationships were negotiated on a daily basis. Failure to positively adapt to these challenges inhibited meaningful engagement and personal fulfillment. Traditionally, leisure services have been delivered within institutions and through various community-based venues. As a result, leisure provision has focused on the needs of the frail elderly who reside in institutions or the well elderly who are able to access leisure amenities within the community. However, the number of older adults electing to live alone continues to grow. As individuals age, the home becomes the preferred context for leisure experiences. If older adults are choosing to live alone, then both their instrumental and leisure needs must be addressed. It is therefore imperative that leisure professionals extend the scope of service delivery to include home-centered older adults.