Abstract:
Both animal and human studies suggest that the efficiency with which we are able to grasp objects is attributable to a repertoire of motor signals derived directly from vision. This is in general agreement with the long-held belief that the automatic generation of motor signals by the perception of objects is based on the actions they afford. In this study, we used magnetoencephalography (MEG) to determine the spatial distribution and temporal dynamics of brain regions activated during passive viewing of object and non-object targets that varied in the extent to which they afforded a grasping action. Synthetic Aperture Magnetometry (SAM) was used to localize task-related oscillatory power changes within specific frequency bands, and the time course of activity within given regions of interest was determined by calculating time-frequency plots using a Morlet wavelet transform. Both single-subject and group-averaged data on the spatial distribution of brain activity are presented. We show that: (i) significant reductions in 10-25 Hz activity within extrastriate cortex, occipito-temporal cortex, sensorimotor cortex and cerebellum were evident with passive viewing of both objects and non-objects; and (ii) reductions in oscillatory activity within the posterior part of the superior parietal cortex (Brodmann area 7, BA7) were only evident with the perception of objects. Assuming that focal reductions in low-frequency oscillations (< 30 Hz) reflect areas of heightened neural activity, we conclude that: (i) activity within a network of brain areas, including the sensorimotor cortex, is not critically dependent on stimulus type and may reflect general changes in visual attention; and (ii) the posterior part of the superior parietal cortex, BA7, is activated preferentially by objects and may play a role in computations related to grasping.
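As a rough illustration of the wavelet-based time-frequency analysis named in this abstract, the sketch below convolves a toy signal with complex Morlet wavelets and compares 10-25 Hz band power before and after a simulated amplitude drop. The test signal, sampling rate and wavelet width are invented stand-ins, not the study's MEG pipeline.

```python
# Minimal sketch of Morlet-wavelet time-frequency analysis on a toy signal.
# The "MEG" trace, sampling rate and wavelet width are invented stand-ins.
import numpy as np

fs = 250.0                                    # sampling rate, Hz
t = np.arange(0, 2, 1 / fs)
# 15 Hz oscillation whose amplitude drops mid-epoch, mimicking an
# event-related reduction of 10-25 Hz power.
sig = np.sin(2 * np.pi * 15 * t) * np.where(t < 1, 1.0, 0.3)
sig = sig + 0.2 * np.random.default_rng(0).normal(size=t.size)

def morlet_power(sig, fs, freqs, n_cycles=7):
    """Time-frequency power via convolution with complex Morlet wavelets."""
    out = np.empty((len(freqs), sig.size))
    for i, f in enumerate(freqs):
        s_t = n_cycles / (2 * np.pi * f)               # Gaussian width, s
        tw = np.arange(-4 * s_t, 4 * s_t, 1 / fs)      # wavelet support
        w = np.exp(2j * np.pi * f * tw) * np.exp(-tw**2 / (2 * s_t**2))
        w /= np.abs(w).sum()                           # crude normalization
        out[i] = np.abs(np.convolve(sig, w, mode="same")) ** 2
    return out

freqs = np.arange(5.0, 41.0)
tfr = morlet_power(sig, fs, freqs)
band = (freqs >= 10) & (freqs <= 25)
print("mean 10-25 Hz power, first vs second half:",
      tfr[band][:, t < 1].mean(), tfr[band][:, t >= 1].mean())
```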
Abstract:
Financial institutions are an integral part of any modern economy. In the 1970s and 1980s, Gulf Cooperation Council (GCC) countries made significant progress in financial deepening and in building a modern financial infrastructure. This study aims to evaluate the performance (efficiency) of the banking sector in GCC countries. Since the selected variables include negative data for some banks and positive data for others, and the available evaluation methods cannot handle this case, we developed a Semi-Oriented Radial Model (SORM) to perform the evaluation. Furthermore, since the SORM results alone provide limited information for decision makers (bankers, investors, etc.), we propose a second-stage analysis using the classification and regression (C&R) method, combining the SORM results with other environmental data (financial, economic and political) to derive rules characterizing the efficient banks; the results are thus useful to bankers seeking to improve their banks' performance and to investors seeking to maximize their returns. There are two main approaches to evaluating the performance of Decision Making Units (DMUs), and under each there are different methods with different assumptions. The parametric approach is based on econometric regression theory, while the nonparametric approach is based on mathematical linear programming. Under the nonparametric approach there are two methods: Data Envelopment Analysis (DEA) and Free Disposal Hull (FDH). Under the parametric approach there are three methods: Stochastic Frontier Analysis (SFA), Thick Frontier Analysis (TFA) and Distribution-Free Analysis (DFA). The literature shows that DEA and SFA are the most widely applied methods in the banking sector, with DEA appearing to be the most popular among researchers. However, DEA, like SFA, still faces many challenges; one of these is how to deal with negative data, since DEA requires the assumption that all input and output values are non-negative, while in many applications negative outputs can appear, e.g. losses in contrast with profits. Although a few DEA models have been developed to deal with negative data, we believe that each of them has its own limitations; we therefore developed the SORM, which handles the negativity issue in DEA. The application results using SORM show that the overall efficiency of GCC banking is relatively high (85.6%). Although the efficiency score fluctuated over the study period (1998-2007) due to the second Gulf War and the international financial crisis, it remained higher than the efficiency scores of counterpart banks in other countries. Banks operating in Saudi Arabia appear to be the most efficient, followed by UAE, Omani and Bahraini banks, while banks operating in Qatar and Kuwait appear to be the least efficient; these two countries were the most affected by the second Gulf War. The results also show no statistical relationship between operating style (Islamic or conventional) and bank efficiency. Even though the difference is not statistically significant, Islamic banks appear to be more efficient than conventional banks, with an average efficiency score of 86.33% compared to 85.38% for conventional banks. Furthermore, Islamic banks appear to have been more affected by the political crisis (the second Gulf War), whereas conventional banks appear to have been more affected by the financial crisis.
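For readers unfamiliar with DEA, the following is a minimal sketch of the standard input-oriented CCR envelopment model solved as a linear program with scipy. It is not the SORM model developed in the study (whose point is to handle negative data); the bank data and variable names are hypothetical.

```python
# Minimal input-oriented CCR DEA model (envelopment form), solved as an LP.
# Illustrative only; not the SORM model from the abstract. Data are made up.
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: columns are DMUs (banks), rows are inputs/outputs.
X = np.array([[20.0, 30.0, 40.0],      # input 1, e.g. staff costs
              [300.0, 200.0, 100.0]])  # input 2, e.g. fixed assets
Y = np.array([[100.0, 120.0, 90.0]])   # output, e.g. loans

n_dmus = X.shape[1]

def ccr_efficiency(o):
    """Efficiency of DMU o: min theta s.t. a composite unit built from the
    lambdas uses at most theta * inputs of o and produces at least o's
    outputs. Requires non-negative data, the assumption SORM relaxes."""
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n_dmus)]               # minimize theta
    # Input rows:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(X.shape[0])
    # Output rows: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])
    b_out = -Y[:, o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n_dmus)
    return res.fun   # theta = 1.0 means the DMU lies on the efficient frontier

for o in range(n_dmus):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```

SORM's contribution, per the abstract, is precisely to relax the non-negativity assumption that this basic formulation relies on.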
Abstract:
We have investigated the microstructure and bonding of two biomass-based porous carbon chromatographic stationary-phase materials (alginic acid-derived Starbon® and calcium alginate-derived mesoporous carbon spheres (AMCS)) and a commercial porous graphitic carbon (PGC), using high-resolution transmission electron microscopy, electron energy loss spectroscopy (EELS), N2 porosimetry and X-ray photoelectron spectroscopy (XPS). The planar carbon sp2 content of all three material types is similar to that of traditional non-graphitizing carbon, although both biomass-based carbon types contain a greater percentage of fullerene character (i.e. curved graphene sheets) than a non-graphitizing carbon pyrolyzed at the same temperature. This is thought to arise during the pyrolytic breakdown of hexuronic acid residues into C5 intermediates. Energy-dispersive X-ray and XPS analysis reveals a homogeneous distribution of calcium in the AMCS, and a calcium catalysis mechanism is discussed. That both Starbon® and AMCS, with high fullerene character, show chromatographic properties similar to those of a commercial PGC material with extended graphitic stacks suggests that, for separations at the molecular level, curved fullerene-like and planar graphitic sheets are equivalent in PGC chromatography. In addition, variation in the number of graphitic layers suggests that stack depth has minimal effect on the retention mechanism in PGC chromatography.
Abstract:
Atomisation of an aqueous solution for tablet film coating is a complex process with multiple factors determining droplet formation and properties. The importance of droplet size for an efficient process and a high-quality final product has been noted in the literature, with smaller droplets reported to produce smoother, more homogeneous coatings whilst simultaneously avoiding the risk of damage through over-wetting of the tablet core. In this work the effect of droplet size on tablet film-coat characteristics was investigated using X-ray microcomputed tomography (XμCT) and confocal laser scanning microscopy (CLSM). A quality-by-design approach utilising design of experiments (DOE) was used to optimise the conditions necessary for production of droplets at a small (20 μm) and large (70 μm) droplet size. Droplet size distribution was measured using real-time laser diffraction, with the volume median diameter taken as the response. DOE yielded information on the relationship that three critical process parameters (pump rate, atomisation pressure and coating-polymer concentration) had with droplet size. The model generated was robust, scoring highly for model fit (R2 = 0.977), predictability (Q2 = 0.837), validity and reproducibility. Modelling confirmed that all parameters had either a linear or quadratic effect on droplet size and revealed an interaction between pump rate and atomisation pressure. Fluidised-bed coating of tablet cores was performed with either small or large droplets, followed by CLSM and XμCT imaging. Addition of commonly used contrast materials to the coating solution improved visualisation of the coating by XμCT, showing the coat as a discrete section of the overall tablet. Imaging provided qualitative and quantitative evidence revealing that smaller droplets formed thinner, more uniform and less porous film coats.
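As a hedged illustration of the kind of DOE model described (quadratic terms plus a pump-rate by atomisation-pressure interaction), the sketch below fits a full quadratic response surface to synthetic data. The factor levels, coefficients and noise are invented, not the study's measurements.

```python
# Sketch of a quadratic response-surface fit of the kind used in DOE studies.
# Data and factor effects are hypothetical, not from the cited experiments.
import numpy as np

rng = np.random.default_rng(0)

# Coded factor levels (-1..+1): pump rate, atomisation pressure, polymer conc.
n = 20
pump, press, conc = rng.uniform(-1, 1, (3, n))

# Hypothetical droplet sizes (um) with a pump x pressure interaction.
d50 = 45 + 12*pump - 15*press + 5*conc + 4*press**2 - 6*pump*press \
      + rng.normal(0, 1.5, n)

# Design matrix: intercept, linear, quadratic and interaction terms.
Xd = np.column_stack([np.ones(n), pump, press, conc,
                      pump**2, press**2, conc**2,
                      pump*press, pump*conc, press*conc])
beta, *_ = np.linalg.lstsq(Xd, d50, rcond=None)

resid = d50 - Xd @ beta
r2 = 1 - resid @ resid / ((d50 - d50.mean()) @ (d50 - d50.mean()))
print(f"R^2 = {r2:.3f}")                   # analogous to the quoted model fit
print("pump x pressure coefficient:", round(beta[7], 2))  # interaction term
```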
Abstract:
The main goal of this thesis is to show the versatility of glancing angle deposition (GLAD) thin films in applications. This research focuses first on studying the effect of selected deposition variables in GLAD thin films, and second on demonstrating the flexibility of GLAD films to be incorporated in two different applications: (1) as a reflective coating in low-level concentration photovoltaic systems, and (2) as an anode structure in dye-sensitized solar cells (DSSCs). A particular type of microstructure composed of tilted micro-columns of titanium is fabricated by GLAD. The microstructures form elongated, fan-like tilted micro-columns that demonstrate anisotropic scattering. The thin-film texture changes from fiber texture to tilted fiber texture with increasing vapor incidence angle; at very large deposition angles, biaxial texture forms. The morphology of the thin films deposited under extreme shadowing conditions and at high temperature (below the recrystallization zone) shows a porous and inclined micro-columnar morphology, resulting from the dominance of shadowing over adatom surface diffusion. The anisotropic scattering behavior of the tilted Ti thin-film coatings is quantified by bidirectional reflectance distribution function (BRDF) measurements and is found to be consistent with reflectance from the microstructure acting as an array of inclined micro-mirrors that redirect the incident light in a non-specular reflection. A silver coating of the surface of the tilted Ti micro-columns is applied to enhance the total reflectance of the Ti thin films while keeping the anisotropic scattering behavior. Using this coating as a booster reflector in a laboratory-scale low-level concentration photovoltaic system increases the short-circuit current of the reference silicon solar cell by 25%. Finally, based on the scattering properties of the tilted micro-columnar microstructure, its scattering effect is studied as part of a titanium dioxide microstructure for the anode in DSSCs. GLAD-fabricated TiO2 anode microstructures consisting of vertical micro-columns alone and of vertical micro-columns topped with tilted micro-columns are compared. The solar cell with the two-part microstructure shows the highest monochromatic incident-photon-to-current efficiency, a 20% improvement compared to the vertical microstructure, and the efficiency of the cell increases from 1.5% to 2% due to the scattering layer.
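The "array of inclined micro-mirrors" interpretation can be illustrated with a few lines of vector geometry: a facet tilted by angle θ deflects a normally incident ray by 2θ away from the substrate's specular direction. The 30° tilt below is an arbitrary value chosen purely for illustration.

```python
# Toy calculation of the "inclined micro-mirror" picture: a tilted facet
# reflects an incoming ray away from the specular direction of the substrate.
import numpy as np

def reflect(d, n):
    """Mirror reflection of direction d about unit normal n."""
    return d - 2 * np.dot(d, n) * n

tilt = np.radians(30)                        # column tilt from substrate normal
n_facet = np.array([np.sin(tilt), 0.0, np.cos(tilt)])
d_in = np.array([0.0, 0.0, -1.0])            # normally incident light

d_out = reflect(d_in, n_facet)
print("outgoing direction:", d_out)
# For a flat substrate the ray would return along +z; the tilted facet
# redirects it by 2*tilt (60 deg here), i.e. non-specularly w.r.t. the substrate.
```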
Abstract:
Advertising investment and audience figures indicate that television continues to lead as a mass advertising medium. However, its effectiveness is questioned due to problems such as zapping, saturation and audience fragmentation, which has favoured the development of non-conventional advertising formats. This study provides empirical evidence for these theoretical developments. The investigation analyzes the recall generated by four non-conventional advertising formats in a real environment: short programme (branded content), television sponsorship, and internal and external telepromotion, versus the more conventional spot. The methodology integrated secondary data with primary data from computer-assisted telephone interviews (CATI) performed ad hoc on a sample of 2000 individuals, aged 16 to 65, representative of the total television audience. Our findings show that non-conventional advertising formats are more effective at a cognitive level: all the formats analyzed generate higher levels of both unaided and aided recall than the spot.
Abstract:
This proposal is a non-quantitative study based on a corpus of real data which offers a principled account of the translation strategies employed in the translation of English film titles into Spanish in terms of cognitive modeling. More specifically, we draw on Ruiz de Mendoza and Galera’s (2014) work on what they term content (or low-level) cognitive operations, based on either ‘stands for’ or ‘identity’ relations, in order to investigate possible motivating factors for translations which abide by oblique procedures, i.e. for non-literal renderings of source titles. The present proposal is made in consonance with recent findings within the framework of Cognitive Linguistics (Samaniego 2007), which evidence that this linguistic approach can fruitfully address some relevant issues in Translation Studies, the most outstanding for our purposes being the exploration of the cognitive operations which account for the use of translation strategies (Rojo and Ibarretxe-Antuñano 2013: 10), mainly expansion and reduction operations, parameterization, echoing, mitigation and comparison by contrast. This fits in nicely with a descriptive approach to translation and particularly with skopos theory, whose main aim consists in achieving functionally adequate renderings of source texts.
Abstract:
We numerically analyse the behavior of the full distribution of collective observables in quantum spin chains. While most previous studies of quantum critical phenomena are limited to the first moments, here we demonstrate how quantum fluctuations at criticality lead to highly non-Gaussian distributions. Interestingly, we show that the distributions for different system sizes collapse onto the same curve after scaling, for a wide range of transitions: first- and second-order quantum transitions and transitions of the Berezinskii–Kosterlitz–Thouless type. We propose and analyse the feasibility of an experimental reconstruction of the distribution using light–matter interfaces for atoms in optical lattices or in optical resonators.
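A minimal numerical sketch of the distribution-collapse idea: compute the full ground-state distribution of a collective observable (here the total z-magnetization of a transverse-field Ising chain at its critical point) for several sizes by exact diagonalization, then standardize each distribution by its mean and width. The model, observable and scaling used in the paper may differ; this is illustrative only.

```python
# Full distribution of a collective observable in small critical Ising chains,
# standardized so that curves for different N can be compared for collapse.
import numpy as np

def ising_ground_state(N, g=1.0):
    """Ground state of H = -sum sz_i sz_{i+1} - g sum sx_i (open chain),
    built as a dense matrix in the z-product basis."""
    dim = 2 ** N
    H = np.zeros((dim, dim))
    for s in range(dim):
        spins = [1.0 if (s >> i) & 1 else -1.0 for i in range(N)]
        H[s, s] = -sum(spins[i] * spins[i + 1] for i in range(N - 1))
        for i in range(N):                    # transverse field flips spin i
            H[s ^ (1 << i), s] -= g
    return np.linalg.eigh(H)[1][:, 0]

for N in (6, 8, 10):
    psi = ising_ground_state(N)               # critical point g = 1
    weights = psi ** 2
    # Distribution of the collective observable M = sum_i sz_i, which is
    # diagonal in this basis.
    m = np.array([2 * bin(s).count("1") - N for s in range(2 ** N)])
    values = np.arange(-N, N + 1, 2)
    P = np.array([weights[m == v].sum() for v in values])
    mu = (P * values).sum()
    sd = np.sqrt((P * values ** 2).sum() - mu ** 2)
    # Standardized distributions; at criticality these curves should
    # (approximately) fall on one scaling function as N grows.
    print(N, np.round(sd * P, 3))
```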
Abstract:
The blast furnace is the main ironmaking production unit in the world, converting iron ore, with coke and hot blast, into liquid iron (hot metal), which is used for steelmaking. The furnace acts as a counter-current reactor charged with layers of raw material of very different gas permeability. The arrangement of these layers, or burden distribution, is the most important factor influencing the gas flow conditions inside the furnace, which dictate the efficiency of the heat transfer and reduction processes. For proper control, the furnace operators should know the overall conditions in the furnace and be able to predict how control actions affect the state of the furnace. However, due to high temperatures and pressure, a hostile atmosphere and mechanical wear, it is very difficult to measure internal variables. Instead, the operators have to rely extensively on measurements obtained at the boundaries of the furnace and make their decisions on the basis of heuristic rules and results from mathematical models. It is particularly difficult to understand the distribution of the burden materials because of the complex behavior of the particulate materials during charging. The aim of this doctoral thesis is to clarify some aspects of burden distribution and to develop tools that can aid the decision-making process in the control of the burden and gas distribution in the blast furnace. A relatively simple mathematical model was created for simulation of the distribution of the burden material with a bell-less top charging system. The model developed is fast, and it can therefore be used by the operators to gain understanding of the formation of layers for different charging programs. The results were verified by findings from charging experiments using a small-scale charging rig in the laboratory. A basic gas flow model was developed which utilized the results of the burden distribution model to estimate the gas permeability of the upper part of the blast furnace. This combined formulation for gas and burden distribution made it possible to implement a search for the best combination of charging parameters to achieve a target gas temperature distribution. As this mathematical task is discontinuous and non-differentiable, a genetic algorithm was applied to solve the optimization problem; a sketch of this approach is given after this abstract. It was demonstrated that the method was able to evolve optimal charging programs that fulfilled the target conditions. Even though the burden distribution model provides information about the layer structure, it neglects some effects which influence the results, such as mixed layer formation and coke collapse. A more accurate numerical method for studying particle mechanics, the Discrete Element Method (DEM), was used to study some aspects of the charging process more closely. Model charging programs were simulated using DEM and compared with the results from small-scale experiments. The mixed layer was defined and its voidage estimated: the mixed layer was found to have about 12% less voidage than layers of the individual burden components. Finally, a model for predicting the extent of coke collapse when heavier pellets are charged over a layer of lighter coke particles was formulated based on slope stability theory, and was used to update the coke layer distribution after charging in the mathematical model. In designing this revision, results from DEM simulations and charging experiments for some charging programs were used.
The findings from the coke collapse analysis can be used to design charging programs with more stable coke layers.
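To make the optimization step concrete, here is a hedged sketch of a genetic algorithm evolving charging parameters toward a target gas temperature distribution. The forward model standing in for the burden-distribution and gas-flow models is a deliberately trivial placeholder, as are the parameter ranges and the target profile.

```python
# Sketch of the genetic-algorithm idea: evolve charging parameters so that a
# (here deliberately toy) forward model reproduces a target radial gas
# temperature profile. Model, parameters and target are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
R = np.linspace(0, 1, 8)                    # dimensionless furnace radius
T_target = 400 + 500 * R**2                 # target gas temperature profile (C)

def forward_model(p):
    """Toy stand-in for the burden-distribution + gas-flow models:
    maps 3 charging parameters to a radial temperature profile."""
    a, b, c = p
    return a + b * R + c * R**2

def fitness(p):
    return -np.sum((forward_model(p) - T_target) ** 2)   # minimize misfit

# Plain GA: tournament selection, blend crossover, Gaussian mutation.
pop = rng.uniform([0, -500, 0], [1000, 500, 1000], size=(40, 3))
for gen in range(200):
    f = np.array([fitness(p) for p in pop])
    new = []
    for _ in range(len(pop)):
        i, j = rng.integers(0, len(pop), 2)
        parent1 = pop[i] if f[i] > f[j] else pop[j]      # tournament
        i, j = rng.integers(0, len(pop), 2)
        parent2 = pop[i] if f[i] > f[j] else pop[j]
        w = rng.uniform(0, 1, 3)
        child = w * parent1 + (1 - w) * parent2          # blend crossover
        child += rng.normal(0, 5, 3) * (rng.uniform(size=3) < 0.2)  # mutation
        new.append(child)
    pop = np.array(new)

best = max(pop, key=fitness)
print("best parameters:", np.round(best, 1))   # ideally close to (400, 0, 500)
```

A GA suits this problem because, as noted above, the charging-program search space is discontinuous and non-differentiable, which rules out gradient-based methods.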
Abstract:
Secure transmission of bulk data is of interest to many content providers. A commercially viable distribution of content requires technology to prevent unauthorised access. Encryption tools are powerful, but have a performance cost. Without encryption, intercepted data may be illicitly duplicated and re-sold, or its commercial value diminished because its secrecy is lost. Two technical solutions make it possible to perform bulk transmissions while retaining security without too high a performance overhead. These are:
a) hierarchical encryption - the stronger the encryption, the harder it is to break but also the more computationally expensive it is. A hierarchical approach to key exchange means that simple and relatively weak encryption and keys are used to encrypt small chunks of data, for example 10 seconds of video. Each chunk has its own key. New keys for this bottom-level encryption are exchanged using a slightly stronger encryption; for example, a whole-video key could govern the exchange of the 10-second chunk keys. At a higher level again, there could be daily or weekly keys securing the exchange of whole-video keys, and at a yet higher level, a subscriber key could govern the exchange of weekly keys. At higher levels, the encryption becomes stronger but is used less frequently, so that the overall computational cost is minimal. The main observation is that the value of each encrypted item determines the strength of the key used to secure it.
b) non-symbolic fragmentation with signal diversity - communications are usually assumed to be sent over a single communications medium, and the data to have been encrypted and/or partitioned in whole-symbol packets. Network and path diversity break up a file or data stream into fragments which are then sent over many different channels, either in the same network or different networks. For example, a message could be transmitted partly over the phone network and partly via satellite. While TCP/IP does a similar thing in sending different packets over different paths, this is done for load-balancing purposes and is invisible to the end application. Network and path diversity deliberately introduce the same principle as a secure communications mechanism: an eavesdropper would need to intercept not just one transmission path but all paths used. Non-symbolic fragmentation of data is also introduced to further confuse any intercepted stream of data. This involves breaking up data into bit strings which are subsequently disordered prior to transmission. Even if all transmissions were intercepted, the cryptanalyst still needs to determine fragment boundaries and correctly order them.
These two solutions depart from the usual idea of data encryption. Hierarchical encryption is an extension of the combined encryption of systems such as PGP, but with the distinction that the strength of encryption at each level is determined by the "value" of the data being transmitted. Non-symbolic fragmentation suppresses or destroys bit patterns in the transmitted data in what is essentially a bit-level transposition cipher, but with unpredictable irregularly-sized fragments. Both technologies have applications outside the commercial sphere and can be used in conjunction with other forms of encryption, being functionally orthogonal.
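Both mechanisms lend themselves to short sketches. First, hierarchical key exchange: per-chunk keys protect small pieces of content, and each level of keys is encrypted under the level above. The sketch uses Fernet symmetric tokens from the Python cryptography package at every level purely for brevity; the scheme described above would use progressively stronger ciphers and keys at higher levels, and all names and chunk sizes are illustrative.

```python
# Sketch of hierarchical key exchange: per-chunk keys encrypt content, a
# whole-video key encrypts the chunk keys, a weekly key encrypts video keys.
# Fernet is used at every level only for brevity; the described scheme would
# use stronger ciphers/keys at higher levels. Names and sizes are illustrative.
from cryptography.fernet import Fernet

chunks = [b"10 seconds of video #%d" % i for i in range(3)]

# Level 0: one weak/cheap key per 10-second chunk.
chunk_keys = [Fernet.generate_key() for _ in chunks]
enc_chunks = [Fernet(k).encrypt(c) for k, c in zip(chunk_keys, chunks)]

# Level 1: a whole-video key governs the exchange of the chunk keys.
video_key = Fernet.generate_key()
enc_chunk_keys = [Fernet(video_key).encrypt(k) for k in chunk_keys]

# Level 2: a weekly/subscriber key governs the exchange of video keys.
weekly_key = Fernet.generate_key()
enc_video_key = Fernet(weekly_key).encrypt(video_key)

# Receiver side: unwrap top-down, decrypting each level with the one above.
video_key_rx = Fernet(weekly_key).decrypt(enc_video_key)
for ek, ec in zip(enc_chunk_keys, enc_chunks):
    ck = Fernet(video_key_rx).decrypt(ek)
    print(Fernet(ck).decrypt(ec))
```

Second, non-symbolic fragmentation: break the bit stream at irregular, non-byte-aligned boundaries, disorder the fragments, and (notionally) send them over diverse channels. The fragment-size range and seed below are arbitrary.

```python
# Companion sketch of non-symbolic fragmentation: split a byte stream into
# irregularly-sized *bit* strings, shuffle them, and notionally send each
# fragment over a different channel. Sizes and the PRNG seed are arbitrary.
import random

data = bytes(range(16))
bits = "".join(f"{b:08b}" for b in data)

rng = random.Random(42)                 # shared-secret seed for both parties
cuts, pos = [], 0
while pos < len(bits):                  # irregular fragment boundaries
    step = rng.randint(3, 11)           # fragments need not align to bytes
    cuts.append(bits[pos:pos + step])
    pos += step

order = list(range(len(cuts)))
rng.shuffle(order)                      # disorder fragments before sending
sent = [cuts[i] for i in order]         # e.g. round-robin over channels

# Receiver (knowing the seed) re-derives boundaries and the ordering.
inverse = sorted(range(len(order)), key=order.__getitem__)
recovered = "".join(sent[i] for i in inverse)
assert recovered == bits
```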
Abstract:
Steam turbines play a significant role in global power generation. In particular, research on low-pressure (LP) steam turbine stages is of special importance for steam turbine manufacturers, vendors, power plant owners and the scientific community, because these stages have lower efficiency than the high-pressure stages. Because of condensation, the last stages of the LP turbine experience irreversible thermodynamic losses, aerodynamic losses and erosion of turbine blades. Additionally, an LP steam turbine requires maintenance due to moisture generation, which also affects turbine reliability. Therefore, the design of energy-efficient LP steam turbines requires a comprehensive analysis of condensation phenomena and the corresponding losses occurring in the steam turbine, either by experiments or with numerical simulations. The aim of the present work is to apply computational fluid dynamics (CFD) to enhance the existing knowledge and understanding of condensing steam flows and the loss mechanisms that arise from the irreversible heat and mass transfer during the condensation process in an LP steam turbine. Throughout this work, two commercial CFD codes were used to model non-equilibrium condensing steam flows. The Eulerian-Eulerian approach was utilised, in which the mixture of vapour and liquid phases was solved by the Reynolds-averaged Navier-Stokes equations. The nucleation process was modelled with classical nucleation theory, and two different droplet growth models were used to predict the droplet growth rate. The flow turbulence was solved by employing the standard k-ε and the shear stress transport k-ω turbulence models; both models were modified and implemented in the CFD codes. The thermodynamic properties of the vapour and liquid phases were evaluated with real gas models. In this thesis, various topics, namely the influence of real gas properties, turbulence modelling, unsteadiness and the blade trailing-edge shape on wet-steam flows, are studied with different convergent-divergent nozzles, a turbine stator cascade and a 3D turbine stator-rotor stage. The simulated results were evaluated and discussed together with the available experimental data in the literature. The grid-independence study revealed that an adequate grid size is required to capture the correct trends of condensation phenomena in LP turbine flows. The study shows that accurate real gas properties are important for the precise modelling of non-equilibrium condensing steam flows. The turbulence modelling revealed that the flow expansion, and subsequently the rate of formation of liquid droplet nuclei and their growth process, were affected by the choice of turbulence model; the losses were rather sensitive to the turbulence modelling as well. Based on the presented results, correct computational prediction of wet-steam flows in the LP turbine requires the turbulence to be modelled accurately. The trailing-edge shape of the LP turbine blades influenced the liquid droplet formation, distribution and sizes, as well as loss generation. The study shows that the semicircular trailing-edge shape produced the smallest predicted droplet sizes, while the square trailing-edge shape produced the greatest predicted losses. The analysis of steady and unsteady calculations of wet-steam flow showed that, in unsteady simulations, the interaction of wakes in the rotor blade row affected the flow field.
The flow unsteadiness influenced the nucleation and droplet growth processes due to the fluctuation in the Wilson point.
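As a back-of-the-envelope companion to the nucleation modelling described above, the sketch below evaluates the classical nucleation theory (CNT) barrier and steady-state rate for supersaturated steam. The property values (surface tension, liquid density, a toy saturation pressure) are rough constants for illustration; wet-steam CFD codes evaluate them from equations of state and often apply empirical corrections.

```python
# Back-of-the-envelope classical nucleation theory (CNT) estimate of the
# steady-state nucleation rate for water droplets in supersaturated steam.
# Property values are rough constants for illustration only.
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
R_v = 461.5               # specific gas constant of water vapour, J/(kg K)
m = 2.99e-26              # mass of one water molecule, kg

def cnt_rate(T, S, sigma=0.06, rho_l=990.0):
    """Returns (J, r_star): nucleation rate (m^-3 s^-1) and critical radius
    (m) at temperature T (K) and supersaturation S = p / p_sat, with assumed
    surface tension sigma (N/m) and liquid density rho_l (kg/m^3)."""
    rho_v = S * 1.0e4 / (R_v * T)                      # vapour density (toy p_sat)
    dg = rho_l * R_v * T * math.log(S)                 # bulk driving force, J/m^3
    r_star = 2 * sigma / dg                            # critical radius, m
    dG_star = 16 * math.pi * sigma**3 / (3 * dg**2)    # energy barrier, J
    prefac = (rho_v**2 / rho_l) * math.sqrt(2 * sigma / (math.pi * m**3))
    return prefac * math.exp(-dG_star / (k_B * T)), r_star

J, r_star = cnt_rate(T=320.0, S=4.0)
print(f"critical radius ~ {r_star*1e9:.2f} nm, J ~ {J:.3e} m^-3 s^-1")
```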
Abstract:
Effective management of invasive fishes depends on the availability of updated information about their distribution and spatial dispersion. Forensic analysis was performed using online and published data on the European catfish, Silurus glanis L., a recent invader in the Tagus catchment (Iberian Peninsula). Eighty records were obtained mainly from anglers’ fora and blogs, and more recently from www.youtube.com. Since the first record in 1998, S. glanis expanded its geographic range by 700 km of river network, occurring mainly in reservoirs and in high-order reaches. Human-mediated and natural dispersal events were identified, with the former occurring during the first years of invasion and involving movements of >50 km. Downstream dispersal directionality was predominant. The analysis of online data from anglers was found to provide useful information on the distribution and dispersal patterns of this non-native fish, and is potentially applicable as a preliminary, exploratory assessment tool for other non-native fishes.
Abstract:
A Non-Indigenous Species (NIS) is defined as an organism introduced by humans outside its natural past or present range of distribution that successfully survives, reproduces, and establishes itself in the new environment. Harbors and tourist marinas are considered NIS hotspots, as they are departure and arrival points for numerous vessels and because of the presence of free artificial substrates, which facilitate colonization by NIS. To detect the arrival of new NIS early, monitoring benthic communities in ports is essential. Autonomous Reef Monitoring Structures (ARMS) are standardized passive collectors used to assess marine benthic communities. Here we use an integrative approach based on multiple 3-month ARMS deployments (from April 2021 to October 2022) to characterize the benthic communities (with a focus on NIS) of two sites: a commercial port (Harbor) and a tourist marina (Marina) of Ravenna. The colonizing sessile communities were assessed from the percentage coverage of taxa through image analysis, and the vagile fauna (> 2 mm) was identified morphologically using stereomicroscopy and light microscopy. Overall, 97 taxa were identified, 19 of which were NIS. All NIS had already been observed in port environments in the Mediterranean Sea, but the presence of the polychaete Schistomeringos cf. japonica (Annenkova, 1937) was observed for the first time; molecular analysis is, however, needed to confirm its identity. The Harbor and the Marina host significantly different benthic communities, with abundances that differ significantly depending on the sampling period. While the differences between sites are related to their different environmental characteristics and anthropogenic pressures, the differences among sampling times seem related to the different life cycles of the most abundant species. This thesis shows that ARMS, together with integrative taxonomic approaches, are useful tools for the early detection of NIS and could be used for long-term monitoring of their presence.
Abstract:
The biochemical responses of the enzymatic antioxidant system of a drought-tolerant cultivar (IACSP 94-2094) and a commercial cultivar in Brazil (IACSP 95-5000), grown under two levels of soil water restriction (70% and 30% of soil available water content), were investigated. IACSP 94-2094 exhibited one additional active superoxide dismutase isoenzyme (Cu/Zn-SOD VI) in comparison to IACSP 95-5000, possibly contributing to the heightened response of IACSP 94-2094 to the induced stress. The total glutathione reductase (GR) activity increased substantially in IACSP 94-2094 under severe water stress; however, the appearance of one GR isoenzyme and the disappearance of another were found not to be related to the stress response, because the cultivars in both treatment groups (control and water restriction) exhibited identical changes. Catalase (CAT) activity seems to have a more direct role in H2O2 detoxification under water stress conditions, and the shift in isoenzymes in the tolerant cultivar might have contributed to this response, which may depend on the location where the excess H2O2 is produced under stress. The improved performance of IACSP 94-2094 under drought stress was associated with a more efficient antioxidant system response, particularly under mild stress.
Abstract:
Genetically modified foods are a major concern around the world due to the lack of information concerning their safety and health effects. This work evaluates differences, at the proteomic level, between two types of crop samples: transgenic (MON810 event carrying the Cry1Ab gene, which confers resistance to insects) and non-transgenic maize flour commercialized in Brazil. The 2-D DIGE technique revealed 99 differentially expressed spots, which were collected from 2-D PAGE gels and identified via mass spectrometry (nESI-QTOF MS/MS). The differences in protein abundance between the transgenic and non-transgenic samples could arise from the genetic modification or from environmental influences on the commercial samples. The major functional category of proteins identified was related to disease/defense and, although differences were observed between samples, no toxins or allergenic proteins were found.