[1] PRENTICE B E,KNOTTS R.Cargo airships:International competition[J].Journal of Transportation Technologies,2014,4:187-195.
[2] 赵达,刘东旭,孙康文,等.平流层飞艇研制现状、技术难点及发展趋势[J].航空学报,2016,37(1):45-56.ZHAO D,LIU D X,SUN K W,et al.Research status,technical difficulties and development trend of stratospheric airship[J].Acta Aeronautica et Astronautica Sinica,2016,37(1):45-56(in Chinese).
[3] 郭虓.平流层浮空器轨迹优化研究[D].北京:北京航空航天大学,2013:29-36.GUO X.Trajectory optimization research for stratospheric aerostat[D].Beijing:Beihang University,2013:29-36(in Chinese).
[4] KHOURY G A.Airship technology[M].New York:Cambridge University Press,2012:34-40.
[5] YANG Y,WU J,ZHENG W.Positioning control for an autonomous airship[J].Journal of Aircraft,2016,53(6):1638-1646.
[6] ZHENG Z W,ZHU M,SHI D L,et al.Hovering control for a stratospheric airship in unknown wind:AIAA-2014-0973[R].Reston:AIAA,2014.
[7] ZHENG Z,LIU L,ZHU M.Integrated guidance and control path following and dynamic control allocation for a stratospheric airship with redundant control systems[J].Proceedings of the Institution of Mechanical Engineers,Part G:Journal of Aerospace Engineering,2016,230(10):1813-1826.
[8] YANG Y,YAN Y,ZHU Z,et al.Positioning control for an unmanned airship using sliding mode control based on fuzzy approximation[J].Proceedings of the Institution of Mechanical Engineers,Part G:Journal of Aerospace Engineering,2014,228(14):2627-2640.
[9] ABBEEL P,COATES A,QUIGLEY M,et al.An application of reinforcement learning to aerobatic helicopter flight[C]//Advances in Neural Information Processing Systems,2007:1-8.
[10] 徐昕.增强学习与近似动态规划[M].北京:科学出版社,2010:18-27.XU X.Reinforcement learning and approximate dynamic programming[M].Beijing:Science Press,2010:18-27(in Chinese).
[11] PEARRE B,BROWN T X.Model-free trajectory optimization for unmanned aircraft serving as data ferries for widespread sensors[J].Remote Sensing,2012,4(10):2971-3005.
[12] RAGI S,CHONG E K P.UAV path planning in a dynamic environment via partially observable Markov decision process[J].IEEE Transactions on Aerospace and Electronic Systems,2013,49(4):2397-2412.
[13] DUNN C,VALASEK J,KIRKPATRICK K.Unmanned air system search and localization guidance using reinforcement learning:AIAA-2012-2589[R].Reston:AIAA,2012.
[14] ZHANG B,MAO Z,LIU W,et al.Geometric reinforcement learning for path planning of UAVs[J].Journal of Intelligent & Robotic Systems,2015,77(2):391-409.
[15] FAUST A.Reinforcement learning and planning for preference balancing tasks[J].AI Matters,2015,1(3):8-12.
[16] KO J,KLEIN D J,FOX D,et al.Gaussian processes and reinforcement learning for identification and control of an autonomous blimp[C]//Proceedings 2007 IEEE International Conference on Robotics and Automation.Piscataway,NJ:IEEE Press,2007:742-747.
[17] ROTTMANN A,PLAGEMANN C,HILGERS P,et al.Autonomous blimp control using model-free reinforcement learning in a continuous state and action space[C]//2007 IEEE/RSJ International Conference on Intelligent Robots and Systems.Piscataway,NJ:IEEE Press,2007:1895-1900.
[18] LIN C M,PENG Y F.Adaptive CMAC-based supervisory control for uncertain nonlinear systems[J].IEEE Transactions on Systems,Man,and Cybernetics,Part B(Cybernetics),2004,34(2):1248-1260.
[19] SCHMIDT D K.Modeling and near-space station keeping control of a large high-altitude airship[J].Journal of Guidance,Control,and Dynamics,2007,30(2):540-547.
[20] LS-S1200 UAV airship system overview parameters[EB/OL].[2017-12-18].http://www.lonsan.com.cn/english/Products_1.asp?oneclass=5&pid=13.
[21] ATAEI M,YOUSEFI-KOMA A.Three-dimensional optimal path planning for waypoint guidance of an autonomous underwater vehicle[J].Robotics and Autonomous Systems,2015,67:23-32.