461 results for Hamed, Amir


Relevance:

20.00%

Publisher:

Abstract:

At head of title: Novela.

Relevance:

20.00%

Publisher:

Relevance:

20.00%

Publisher:

Abstract:

Active vibration control using time delay for a cantilever beam is developed in this paper. The equation of motion of the system is developed in the standard discrete formulation, and a discrete quadratic function is used to design the controller. The original contribution of this paper is the use of a genetic algorithm to determine the optimal time-delay feedback for active vibration control of a cantilever beam. Simulations of the beam demonstrated that the genetic algorithm correctly identified the time delay that produced the quickest attenuation of unwanted vibrations for both mode one and mode two. In terms of frequency response, the optimal time delay for both modes reduced the resonant amplitude. In a mixed-mode situation, the simulation demonstrated that an optimal time delay could be identified.
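As an illustration of how such a search could be set up (this is not the paper's beam model, cost function, or genetic-algorithm configuration; all parameter values are hypothetical), the sketch below simulates a single-mode oscillator with delayed displacement feedback and lets a simple genetic algorithm search for the time delay that attenuates vibration fastest:

```python
import numpy as np

# Single-mode stand-in for the beam: x'' + 2*zeta*wn*x' + wn^2*x = u(t),
# with delayed displacement feedback u(t) = -g * x(t - tau).
# All numerical values are illustrative assumptions, not from the paper.

def vibration_cost(tau, wn=2 * np.pi * 5.0, zeta=0.01, g=200.0,
                   dt=2e-4, t_end=1.0, x0=1.0):
    """Integrated squared displacement: lower means faster attenuation."""
    n = int(t_end / dt)
    d = int(round(tau / dt))                     # delay in time steps
    x, v = np.zeros(n), np.zeros(n)
    x[0] = x0                                    # initial tip deflection
    for k in range(n - 1):
        x_del = x[k - d] if k >= d else x0       # assume x(t) = x0 for t < 0
        a = -g * x_del - 2 * zeta * wn * v[k] - wn ** 2 * x[k]
        v[k + 1] = v[k] + dt * a                 # semi-implicit Euler step
        x[k + 1] = x[k] + dt * v[k + 1]
    return float(np.sum(x ** 2) * dt)

def ga_optimal_delay(cost=vibration_cost, lo=0.0, hi=0.05,
                     pop_size=24, generations=30, seed=0):
    """Genetic algorithm over the scalar time delay tau in [lo, hi] seconds."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, pop_size)
    for _ in range(generations):
        fitness = np.array([cost(tau) for tau in pop])
        parents = pop[np.argsort(fitness)][:pop_size // 2]        # selection
        mums = rng.choice(parents, pop_size - parents.size)
        dads = rng.choice(parents, pop_size - parents.size)
        kids = 0.5 * (mums + dads)                                # crossover
        kids += rng.normal(0.0, 0.05 * (hi - lo), kids.size)      # mutation
        pop = np.clip(np.concatenate([parents, kids]), lo, hi)
    return pop[np.argmin([cost(tau) for tau in pop])]

if __name__ == "__main__":
    tau_opt = ga_optimal_delay()
    print(f"best time delay found: {tau_opt * 1e3:.2f} ms")
```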

Relevance:

10.00%

Publisher:

Abstract:

National culture is deeply rooted in values, which are learned and acquired when we are young (2007, p. 6) and "embedded deeply in everyday life" (Newman & Nollen, 1996, p. 754). Values have helped to shape us into who we are today. In other words, as we grow older, the cultural values we have learned and adapted to mould our daily practices. This is reflected in our actions, behaviours, and the ways in which we communicate. On this basis, it can be suggested that national culture may also influence organisational culture, as our "behaviour at work is a continuation of behaviour learned earlier" (Hofstede, 1991, p. 4). Cultural influence in an organisation can be evidenced by looking at communication practices: how employees interact with one another in their daily work. Earlier studies in organisational communication see communication as the heart of the organisation it serves, and as "the essence of organised activity and the basic process out of which all other functions derive" (Bavelas and Barret, cited in Redding, 1985, p. 7). Hence, understanding how culture influences communication will help with understanding organisational behaviour. This study was conducted to examine how cultural values, referred to as culture dimensions in this thesis, influenced communication practices in an organisation that was going through a change process. A single case study was conducted in a Malaysian organisation to investigate how the Malaysian culture dimensions of respect, collectivism, and harmony were evidenced in communication practices. Data were collected from twelve semi-structured interviews and five observation sessions. Guided by six attributes of the Malaysian culture dimensions identified in the literature, (1) acknowledging seniority, knowledge and experience, (2) saving face, (3) showing loyalty to the organisation and its leaders, (4) demonstrating cohesiveness among members, (5) prioritising group interests over personal interests, and (6) avoiding confrontation, this study found eighteen communication practices performed by employees of the organisation. This research contributes to previous cultural work, especially in the Malaysian context, in which evidence of the Malaysian culture dimensions of respect, collectivism, and harmony was displayed in communication practices: (1) acknowledging the status quo, (2) obeying orders and directions, (3) name dropping, (4) keeping silent, (5) avoiding questioning, (6) having separate conversations, (7) adding, not criticising, (8) sugar coating, (9) instilling a sense of belonging, (10) taking sides, (11) cooperating, (12) sacrificing personal interest, (13) protecting identity, (14) negotiating, (15) saying "yes" instead of "no", (16) giving politically correct answers, (17) apologising, and (18) tolerating errors. Insights from these findings will help us to understand organisational challenges that rely on communication, such as organisational change. The findings will therefore be relevant to practitioners seeking to understand the impact of culture on communication practices across countries.

Relevance:

10.00%

Publisher:

Abstract:

Recent observations of particle size distributions and particle concentrations near a busy road cannot be explained by the conventional mechanisms for the evolution of combustion aerosols. Specifically, these mechanisms appear inadequate to explain the experimentally observed particle transformation and the evolution of the total number concentration. This led to the development of a new mechanism for the evolution of combustion aerosol nano-particles based on their thermal fragmentation. A complex and comprehensive pattern of evolution of combustion aerosols, involving particle fragmentation, was then proposed and justified. In that model it was suggested that thermal fragmentation occurs in aggregates of primary particles, each of which contains a solid graphite/carbon core surrounded by volatile molecules bonded to the core by strong covalent bonds. Because of these strong covalent bonds between the core and the volatile (frill) molecules, such primary composite particles can be regarded as solid, despite the presence of a significant (possibly dominant) volatile component. Fragmentation occurs when the weak van der Waals forces between such primary particles are overcome by their thermal (Brownian) motion. In this work, the accepted concept of thermal fragmentation is extended to determine whether fragmentation is likely in liquid composite nano-particles. It has been demonstrated that, at least at some stages of evolution, combustion aerosols contain a large number of composite liquid particles, presumably containing several components such as water, oil, volatile compounds, and minerals. It is possible that such composite liquid particles may also experience thermal fragmentation and thus contribute to, for example, the evolution of the total number concentration as a function of distance from the source. Therefore, the aim of this project is to examine theoretically the possibility of thermal fragmentation of composite liquid nano-particles consisting of immiscible liquid components. The specific focus is on ternary systems which include two immiscible liquid droplets surrounded by another medium (e.g., air). The analysis shows that three different structures are possible: complete encapsulation of one liquid by the other, partial encapsulation of the two liquids in a composite particle, and the two droplets separated from each other. The probability of thermal fragmentation of two coagulated liquid droplets is examined for different volumes of the immiscible fluids in a composite liquid particle and for different surface and interfacial tensions, by determining the Gibbs free energy difference between the coagulated and fragmented states and comparing this energy difference with the typical thermal energy kT. The analysis reveals that fragmentation is much more likely for a partially encapsulated particle than for a completely encapsulated particle. In particular, thermal fragmentation is much more likely when the volumes of the two liquid droplets that constitute the composite particle are very different. Conversely, when the two liquid droplets are of similar volumes, the probability of thermal fragmentation is small. It is also demonstrated that the Gibbs free energy difference between the coagulated and fragmented states is not the only important factor determining the probability of thermal fragmentation of composite liquid particles; the second essential factor is the actual structure of the composite particle.

It is shown that the probability of thermal fragmentation also depends strongly on the distance that each of the liquid droplets must travel to reach the fragmented state. In particular, if this distance is larger than the mean free path of the considered droplets in air, the probability of thermal fragmentation should be negligible. It follows that fragmentation of a composite particle in the state with complete encapsulation is highly unlikely, because of the larger distance that the two droplets must travel in order to separate. The analysis of composite liquid particles with the interfacial parameters expected in combustion aerosols demonstrates that thermal fragmentation of these particles may occur, and this mechanism may play a role in the evolution of combustion aerosols. Conditions for thermal fragmentation to play a significant role (for aerosol particles other than those from motor vehicle exhaust) are determined and examined theoretically. Conditions for spontaneous transformation between the states of composite particles with complete and partial encapsulation are also examined, demonstrating the possibility of such transformation in combustion aerosols. Indeed, it was shown that, for some typical components found in aerosols, this transformation could take place on time scales of less than 20 s. The analysis showed that factors influencing surface and interfacial tension played an important role in this transformation process. It is suggested that such transformation may, for example, result in delayed evaporation of composite particles with a significant water component, leading to observable effects in the evolution of combustion aerosols (including possible local humidity maxima near a source, such as a busy road). The obtained results will be important for the further development and understanding of aerosol physics and technologies, including combustion aerosols and their evolution near a source.
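To make the kT comparison above concrete, here is a minimal numerical sketch, not taken from the thesis: it idealises both states as spheres, posits complete encapsulation of droplet 1 by droplet 2, counts only the surface and interfacial contributions to the Gibbs free energy, and uses assumed droplet sizes, temperatures, and tensions.

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def sphere_radius(volume):
    return (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0)

def surface_energy_fragmented(V1, V2, gamma1, gamma2):
    """Two separate spherical droplets, each exposed only to air."""
    r1, r2 = sphere_radius(V1), sphere_radius(V2)
    return gamma1 * 4 * np.pi * r1**2 + gamma2 * 4 * np.pi * r2**2

def surface_energy_encapsulated(V1, V2, gamma2, gamma12):
    """Droplet 1 fully inside droplet 2: inner liquid-liquid interface
    plus the outer liquid-2/air surface."""
    r1 = sphere_radius(V1)
    R = sphere_radius(V1 + V2)
    return gamma12 * 4 * np.pi * r1**2 + gamma2 * 4 * np.pi * R**2

if __name__ == "__main__":
    # Illustrative values only: a 10 nm water droplet inside a 15 nm oil droplet.
    V1 = (4 / 3) * np.pi * (10e-9) ** 3          # water volume
    V2 = (4 / 3) * np.pi * (15e-9) ** 3          # oil volume
    gamma_w, gamma_o, gamma_wo = 72e-3, 30e-3, 40e-3   # N/m, assumed tensions
    T = 400.0                                    # K, an exhaust-like temperature

    G_frag = surface_energy_fragmented(V1, V2, gamma_w, gamma_o)
    G_enc = surface_energy_encapsulated(V1, V2, gamma_o, gamma_wo)
    dG = G_frag - G_enc                          # energy cost of fragmentation
    print(f"Delta G = {dG:.3e} J = {dG / (kB * T):.1f} kT")
    # Fragmentation driven by Brownian motion is plausible only when Delta G
    # is not much larger than a few kT; a large positive value, as here,
    # indicates the fully encapsulated particle is unlikely to fragment.
```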

Relevance:

10.00%

Publisher:

Abstract:

We analyse the puzzling behavior of the volatility of individual stock returns around the turn of the Millennium. There has been much academic interest in this topic, but no convincing explanation has arisen. Our goal is to pull together the many competing explanations currently proposed in the literature to determine which, if any, are capable of explaining the volatility trend. We find that many of the different explanations capture the same unusual trend around the Millennium. We find that many of the variables are very highly correlated, and it is thus difficult to disentangle their relative ability to explain the time-series behavior in volatility. It seems that all of the variables that track average volatility well do so mainly by capturing changes in the post-1994 period. These variables have no time-series explanatory power in the pre-1995 years, questioning the underlying idea that any of the explanations currently presented in the literature can track the trend in volatility over long periods.

Relevance:

10.00%

Publisher:

Abstract:

Log-linear and maximum-margin models are two commonly-used methods in supervised machine learning, and are frequently used in structured prediction problems. Efficient learning of parameters in these models is therefore an important problem, and becomes a key factor when learning from very large data sets. This paper describes exponentiated gradient (EG) algorithms for training such models, where EG updates are applied to the convex dual of either the log-linear or max-margin objective function; the dual in both the log-linear and max-margin cases corresponds to minimizing a convex function with simplex constraints. We study both batch and online variants of the algorithm, and provide rates of convergence for both cases. In the max-margin case, O(1/ε) EG updates are required to reach a given accuracy ε in the dual; in contrast, for log-linear models only O(log(1/ε)) updates are required. For both the max-margin and log-linear cases, our bounds suggest that the online EG algorithm requires a factor of n less computation to reach a desired accuracy than the batch EG algorithm, where n is the number of training examples. Our experiments confirm that the online algorithms are much faster than the batch algorithms in practice. We describe how the EG updates factor in a convenient way for structured prediction problems, allowing the algorithms to be efficiently applied to problems such as sequence learning or natural language parsing. We perform extensive evaluation of the algorithms, comparing them to L-BFGS and stochastic gradient descent for log-linear models, and to SVM-Struct for max-margin models. The algorithms are applied to a multi-class problem as well as to a more complex large-scale parsing task. In all these settings, the EG algorithms presented here outperform the other methods.
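As an illustration of the EG update with simplex constraints described above, here is a minimal sketch on a toy quadratic objective; it is not the paper's log-linear or max-margin dual, and the step size, dimension, and objective are assumptions.

```python
import numpy as np

def eg_minimize(grad, dim, eta=0.1, iters=500):
    """Exponentiated-gradient descent over the probability simplex.

    Multiplicative update: alpha_i <- alpha_i * exp(-eta * grad_i(alpha)),
    followed by renormalisation, so every iterate stays in the simplex.
    """
    alpha = np.full(dim, 1.0 / dim)              # start at the simplex centre
    for _ in range(iters):
        g = grad(alpha)
        # Subtracting min(g) bounds the exponents above by zero; the shift
        # cancels in the normalisation, so the update itself is unchanged.
        alpha = alpha * np.exp(-eta * (g - g.min()))
        alpha /= alpha.sum()
    return alpha

if __name__ == "__main__":
    # Toy simplex-constrained quadratic, standing in for the convex dual
    # of the log-linear / max-margin objectives discussed in the paper.
    rng = np.random.default_rng(1)
    A = rng.normal(size=(5, 5))
    Q = A @ A.T + np.eye(5)                      # positive definite
    b = rng.normal(size=5)
    alpha = eg_minimize(lambda a: Q @ a - b, dim=5)
    print("solution on the simplex:", np.round(alpha, 4), "sum =", alpha.sum())
```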

Relevance:

10.00%

Publisher:

Abstract:

We analyze the puzzling behavior of the volatility of individual stock returns over the past few decades. The literature has provided many different explanations for the trend in volatility, and this paper tests the viability of the different explanations. Virtually all current theoretical arguments that are provided for the trend in the average level of volatility over time lend themselves to explanations about the difference in volatility levels between firms in the cross-section. We therefore focus separately on the cross-sectional and time-series explanatory power of the different proxies. We fail to find a proxy that is able to explain both dimensions well. In particular, we find that the market-to-book ratio of Cao et al. [Cao, C., Simin, T.T., Zhao, J., 2008. Can growth options explain the trend in idiosyncratic risk? Review of Financial Studies 21, 2599–2633] tracks average volatility levels well, but has no cross-sectional explanatory power. On the other hand, the low-price proxy suggested by Brandt et al. [Brandt, M.W., Brav, A., Graham, J.R., Kumar, A., 2010. The idiosyncratic volatility puzzle: time trend or speculative episodes. Review of Financial Studies 23, 863–899] has much cross-sectional explanatory power, but has virtually no time-series explanatory power. We also find that the different proxies do not explain the trend in volatility in the period prior to 1995 (R-squared of virtually zero), but explain rather well the trend in volatility at the turn of the Millennium (1995–2005).
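As a rough illustration of the distinction drawn above between time-series and cross-sectional explanatory power (using random placeholder data standing in for the actual volatility and proxy panels; nothing here reproduces the paper's sample), one might compute the two kinds of R-squared as follows:

```python
import numpy as np
import pandas as pd

# Placeholder firm-month panels of volatility and a candidate proxy.
rng = np.random.default_rng(0)
months = pd.period_range("1985-01", "2005-12", freq="M")
firms = [f"firm_{i}" for i in range(200)]
vol = pd.DataFrame(rng.lognormal(-3, 0.5, (len(months), len(firms))),
                   index=months, columns=firms)
proxy = pd.DataFrame(rng.normal(1.0, 0.3, vol.shape),
                     index=months, columns=firms)

def r_squared(y, x):
    """R^2 of a simple OLS regression of y on a constant and x."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Time-series R^2 in the two sub-periods discussed above.
avg_vol, avg_proxy = vol.mean(axis=1), proxy.mean(axis=1)
for label, period in [("pre-1995", slice(None, "1994-12")),
                      ("1995-2005", slice("1995-01", None))]:
    r2 = r_squared(avg_vol.loc[period].values, avg_proxy.loc[period].values)
    print(f"time-series R^2, {label}: {r2:.3f}")

# Cross-sectional R^2: one regression across firms per month, then averaged.
cross_r2 = np.mean([r_squared(vol.loc[m].values, proxy.loc[m].values)
                    for m in months])
print(f"average cross-sectional R^2: {cross_r2:.3f}")
```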

Relevance:

10.00%

Publisher:

Abstract:

Findings from an online survey conducted by Queensland University of Technology (QUT) show that Australia is suffering from a lack of data on trip generation for use in Traffic Impact Assessments (TIAs). In addition, the independent variables currently used for trip generation estimation do not produce robust outcomes. It is also challenging in Australian TIA studies to account for the impact of a new development on public and active transport, as well as the effect of trip chaining behaviour. With this background in mind, research is being undertaken by QUT to develop a combined model of trip generation and mode choice that accounts for trip chaining effects. It is expected that the model will provide transferable outcomes, as it is developed from socio-demographic parameters. Child care centres within the Brisbane area have been nominated for model development. At the time of writing, the project is in the data collection phase. Findings from the pilot survey on capturing trip chaining and mode choice information reveal that the questionnaire captures the required information at an acceptable level. The results also reveal that several centres within an area should be surveyed in order to provide sufficient data for trip chaining and modal split analysis.