3 results for global virtual engineering teams (GVETs)
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
In this analysis, using available hourly and daily radiometric data measured at Botucatu, Brazil, several empirical models relating the ultraviolet (UV), photosynthetically active (PAR) and near-infrared (NIR) solar components to solar global radiation (G) are established. These models are developed and discussed through the clearness index K(T) (the ratio of global to extraterrestrial solar radiation). The results reveal that the proposed empirical models predict hourly and daily values accurately. Finally, the overall analysis carried out demonstrates that sky conditions matter most when developing correlation models between the UV component and global solar radiation. The linear regression models derived to estimate the PAR and NIR components may be obtained without considering sky conditions, within a maximum variation of 8%. In the case of UV, not taking sky conditions into account may cause a discrepancy of up to 18% for hourly values and 15% for daily values. (C) 2008 Elsevier Ltd. All rights reserved.
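The regression models described above can be sketched as follows. This is a minimal illustration, not the paper's actual fit: the hourly values below are invented for demonstration, and the model form (component fraction as a linear function of the clearness index K_T) is the kind of relation the abstract describes.

```python
import numpy as np

# Hypothetical hourly data (illustrative values, NOT the Botucatu measurements):
# global radiation G, its PAR component, and extraterrestrial radiation G0 (W/m^2).
G   = np.array([120.0, 340.0, 560.0, 710.0, 820.0])
PAR = np.array([ 55.0, 155.0, 250.0, 315.0, 360.0])
G0  = np.array([400.0, 700.0, 900.0, 1000.0, 1050.0])

# Clearness index K_T = G / G0 (global-to-extraterrestrial ratio).
KT = G / G0

# Fraction of the global radiation carried by the PAR component.
frac = PAR / G

# Least-squares linear fit frac = a*K_T + b.
a, b = np.polyfit(KT, frac, 1)

def estimate_par(g, g0):
    """Estimate the PAR component from global radiation via the fitted K_T model."""
    kt = g / g0
    return (a * kt + b) * g
```

The same recipe applies to the UV and NIR components; per the abstract, the UV fit would additionally need to be stratified by sky condition (i.e. by K_T range) to stay within the quoted error bounds.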
Abstract:
A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems. At each outer iteration k, the method requires the epsilon(k)-global minimization of the Augmented Lagrangian subject to simple constraints, where epsilon(k) -> epsilon. Global convergence to an epsilon-global minimizer of the original problem is proved. The subproblems are solved using the alphaBB method. Numerical experiments are presented.
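The outer loop of an Augmented Lagrangian method can be sketched on a toy problem. This is an assumption-laden illustration, not the paper's algorithm: the subproblem here is a convex quadratic solved in closed form, standing in for the epsilon(k)-global minimization the paper performs with alphaBB.

```python
import numpy as np

# Toy problem: minimize f(x) = x1^2 + x2^2 subject to h(x) = x1 + x2 - 1 = 0.
# The exact solution is x = (0.5, 0.5).

def f(x):
    return x[0]**2 + x[1]**2

def h(x):
    return x[0] + x[1] - 1.0

def solve_subproblem(lam, rho):
    # Global minimization of the Augmented Lagrangian
    #   L(x) = f(x) + lam*h(x) + (rho/2)*h(x)^2.
    # By symmetry the minimizer has x1 = x2 = t, and setting
    # dL/dt = 4t + 2*lam + 2*rho*(2t - 1) = 0 gives t = (rho - lam)/(2 + 2*rho).
    t = (rho - lam) / (2.0 + 2.0 * rho)
    return np.array([t, t])

def augmented_lagrangian(max_outer=20, tol=1e-8):
    lam, rho = 0.0, 1.0
    for _ in range(max_outer):
        x = solve_subproblem(lam, rho)
        if abs(h(x)) < tol:
            break
        lam += rho * h(x)   # first-order multiplier update
        rho *= 10.0         # tighten the penalty between outer iterations
    return x
```

In the paper's setting, `solve_subproblem` would instead be an epsilon(k)-global minimization over the simple constraints, with the tolerance epsilon(k) driven down toward epsilon across outer iterations.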
Abstract:
Automated virtual camera control has been widely used in animation and interactive virtual environments. We have developed a free-view video system prototype based on multiple sparse cameras that allows users to control the position and orientation of a virtual camera, enabling the observation of a real scene in three dimensions (3D) from any desired viewpoint. Automatic camera control can be activated to follow objects selected by the user. Our method combines a simple geometric model of the scene composed of planes (the virtual environment), augmented with visual information from the cameras and pre-computed tracking information of moving targets, to generate novel perspective-corrected 3D views of the virtual camera and moving objects. To achieve real-time rendering performance, view-dependent texture-mapped billboards are used to render the moving objects at their correct locations, and foreground masks are used to remove the moving objects from the projected video streams. The current prototype runs on a PC with a common graphics card and can generate virtual 2D views from three cameras at a resolution of 768 x 576 with several moving objects at about 11 fps. (C) 2011 Elsevier Ltd. All rights reserved.
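One building block of view-dependent billboard rendering can be sketched as follows. This is a guess at the selection step, not the authors' code: for a billboard at a known 3D position, pick the real camera whose viewing direction best matches the virtual camera's, so the billboard is textured from the least-distorted source view. The function name and camera layout are invented for illustration.

```python
import numpy as np

def best_source_camera(object_pos, camera_positions, virtual_cam_pos):
    """Return the index of the real camera whose view direction toward the
    object is most aligned with the virtual camera's view direction."""
    object_pos = np.asarray(object_pos, dtype=float)
    virtual_dir = object_pos - np.asarray(virtual_cam_pos, dtype=float)
    virtual_dir /= np.linalg.norm(virtual_dir)
    best_idx, best_cos = -1, -np.inf
    for i, cam in enumerate(camera_positions):
        d = object_pos - np.asarray(cam, dtype=float)
        d /= np.linalg.norm(d)
        cos_angle = float(np.dot(d, virtual_dir))  # cosine of angle between views
        if cos_angle > best_cos:
            best_idx, best_cos = i, cos_angle
    return best_idx

# Three real cameras around a scene; the virtual camera sits near camera 1,
# so camera 1's video frame is the natural billboard texture source.
cams = [(-5.0, 0.0, 2.0), (0.0, -5.0, 2.0), (5.0, 0.0, 2.0)]
print(best_source_camera((0.0, 0.0, 1.0), cams, (0.5, -4.0, 2.0)))  # -> 1
```

A full implementation would also blend between the two nearest cameras and apply the foreground masks the abstract mentions; this sketch covers only the view-selection criterion.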