Instance stats — Title: Climate Justice Social · Version: 4.2.1... · maxTL: 5,000

Timestamp                 Users   Delta   Toots     TNR
Sun 06.10.2024 00:00:13   9,927   +2      605,966   61.0
Sat 05.10.2024 00:00:29   9,925    0      605,309   61.0
Fri 04.10.2024 00:01:13   9,925   +2      604,544   60.9
Thu 03.10.2024 00:00:13   9,923    0      603,966   60.9
Wed 02.10.2024 00:00:06   9,923    0      603,120   60.8
Tue 01.10.2024 00:00:15   9,923    0      602,318   60.7
Mon 30.09.2024 00:01:14   9,923    0      601,459   60.6
Sun 29.09.2024 00:01:08   9,923    0      600,812   60.5
Sat 28.09.2024 00:01:08   9,923   +2      600,059   60.5
Fri 27.09.2024 00:01:08   9,921    0      599,585   60.4
Captain of the SS El Faro (@bangskij) · 08/2023 · Toots: 4,952 · Followers: 396
Sun 06.10.2024 19:16
Speaking of "AI": the recent concert I edited was filmed with two fixed cameras with no operators. One camera shot 1080p at 50 fps, the other 2160p at 23.976. On top of that I was given numerous clips shot by audience members on their cellphones, at a wide variety of resolutions, some in vertical format, all with variable framerate. I used Topaz Video AI to normalize all the framerates to 23.976 and to get rid of the worst compression artifacts in the cellphone footage. This let me edit it as a faux multicamera shoot. For the fixed cameras I also used pan & scan to add camera movement.

The sound came from the two fixed cameras' built-in mics; combined, the quality was kinda ok. After EQing and de-compressing to the best of my ability with traditional tools (that RND decompressor is a godsend), I ran the two streams through online stem separators, which got me a somewhat isolated vocal track, somewhat clean drums, and bass. Mixing it all together hid the individual flaws, gave the instruments some separation, and let me restore even more dynamic range. Finally I ran the video track through a film grain plugin, which hid most of the AI artifacts and remaining compression artifacts and lent a faux 16mm vibe I felt suited this underground electronic band.

All told, I got a pretty good concert video on zero budget. I did this before on earlier concerts, before these AI tools existed, and then I wound up with passable 360p video and barely passable sound. This time I got passable 1080p video with quite nice sound. I'll post a link to the result once it's all been cleared.

Is this all fake? Objectively, yes. But pragmatically, I'll use whatever tool gets the job done, and 2D images are all just 3D space compressed anyway. Forced perspective just works, because 2D images are all fake and reality certainly doesn't run at 24 fps. Photorealism isn't a thing.
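The framerate conform is the crux of making mismatched clips cuttable as one multicam edit. As a minimal sketch of what that step involves, here is a naive nearest-frame retimer: for each output frame on a constant 23.976 fps timeline, it picks the closest source frame by timestamp. (Topaz Video AI actually synthesizes new interpolated frames rather than duplicating nearest ones; this only illustrates the timestamp mapping, and all names here are hypothetical.)

```python
import bisect

TARGET_FPS = 24000 / 1001  # 23.976 fps, the NTSC-film rate

def conform_to_target(src_timestamps, duration, target_fps=TARGET_FPS):
    """Map variable-framerate source timestamps onto a constant-rate timeline.

    Returns, for each output frame, the index of the nearest source frame.
    src_timestamps must be sorted ascending (as decoded frame times are).
    """
    out = []
    n_frames = int(duration * target_fps)
    for i in range(n_frames):
        t = i / target_fps
        j = bisect.bisect_left(src_timestamps, t)
        if j == 0:
            out.append(0)
        elif j >= len(src_timestamps):
            out.append(len(src_timestamps) - 1)
        else:
            # pick whichever neighbouring source frame is closer in time
            before, after = src_timestamps[j - 1], src_timestamps[j]
            out.append(j if after - t < t - before else j - 1)
    return out

# Example: a jittery ~30 fps phone clip conformed to 23.976 fps
src = [0.0, 0.031, 0.068, 0.099, 0.135, 0.166, 0.201, 0.235]
print(conform_to_target(src, duration=0.25))  # -> [0, 1, 2, 4, 5]
```

Dropping or doubling frames this way causes visible judder, which is exactly why motion-interpolating tools exist; but the mapping above is the skeleton both approaches share.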
Now, OpenAI and the rest of the capitalist ghoul class can still go fuck themselves. Nuke the site from orbit, etc.
[Public] Replies: 0 Boosts: 0 Favourites: 0 · via Web