Ultimate++ 511 was released. U++ is a C++ cross-platform rapid application development suite, where “rapid” is achieved by the ‘smart and aggressive’ use of C++ features. The main improvement of the new version is the
Assist++ (code completion, browsing and transformation tool) in TheIDE. See an
overview of the library.
The overview link seems to resolve to http://www.microsoft.com
Try http://upp.sourceforge.net/www$uppweb$overview$en-us.html
Is that a bug in Mozilla/Firefox? It’s weird: typing ‘http/’ followed by anything resolves to Microsoft, but in Opera it gives an error page.
Conspiracy, I think?
If Firefox can’t resolve the address you have typed, it asks Google for the most probable site to go to. If you search for “http” at google.com, you’ll find Microsoft at the top of the result list.
Do a Google search for http. All will become clear 🙂
When you go to that link from the site, it works. The site clearly redirects you based on the referrer, to prevent deep linking.
C++ has the potential to be the most productive language in computing history. Its multiparadigm nature allows the effective development of almost any kind of software, from low level driver code to very high level business logic abstractions.
I must be missing something. What in the world did they base this on?
Unfortunately this potential has been left untapped, due to the lack of truly effective libraries, causing C++ evolution to be stuck somewhere between STL-iterator adaptors and smart-pointers. Ultimate++ finally uncovers this potential….
It’s doubtful it ever can be tapped if you’re using C++…
Rapid development is achieved by the smart and aggressive use of C++ rather than through fancy code generators. In this respect, U++ competes with popular scripting languages while preserving C/C++ runtime characteristics.
Unlikely. Certain features present in most higher level languages have to be implemented in the core language to be effective. Things like automatic memory management, closures, and dynamic types can all obviously be implemented on top, but they become much more cumbersome and lower-level in the process.
The U++ integrated development environment, TheIDE, introduces modular concepts to C++ programming. It features BLITZ-build technology to speed up C++ rebuilds up to 4 times, …
Interesting. I’ll have to take a look at this.
-bytecoder
From the site: Rapid development is achieved by the smart and aggressive use of C++ rather than through fancy code generators. In this respect, U++ competes with popular scripting languages while preserving C/C++ runtime characteristics.
bytecoder: Unlikely. Certain features present in most higher level languages have to be implemented in the core language to be effective. Things like automatic memory management, closures, and dynamic types can all obviously be implemented on top, but they become much more cumbersome and lower-level in the process.
They were comparing to the rapid development ability of scripting languages – automatic memory management, closures, and dynamic types are part of the runtime characteristics of those scripting languages. However, they have implemented something equivalent to dynamic types with their Value class, callbacks can do the job of closures, and memory management is handled by coding conventions that tend to avoid the use of the heap in a very structured way. Not quite the same thing, but it appears just as easy to use and should be faster at runtime.
However, they have implemented something equivalent to dynamic types with their Value class
Hardly. Value is just a type-safe wrapper on top of void*. Dynamic types don’t just allow you to pass around objects without knowing what they are, but use objects without knowing precisely what they are.
callbacks can do the job of closures
Callbacks can do only the job of functions as first class objects (and even then, the semantics in U++ are much more complicated than the equivalent in Lisp or Python). They cannot emulate the behavior of closures generally (which are more like class instances than callbacks).
and memory management is handled by coding conventions that tend to avoid the use of the heap in a very structured way
Very structured, very elaborate, and very complex. Just look at the volume of the semantics associated with stuff like movable types, pick behavior, etc. Why subject yourself to all that? The only reason any of it is necessary is because dealing with pointers is a PITA in C++, and value types result in a lot of unnecessary copying. The STL just accepts the nature of C++, and lives with the copying overhead, but U++ converts that into mental overhead for the programmer.
The lengths C++ programs go to to avoid leaking like sieves are comical. Reference counting, specialized allocators, “smart pointers”, RAII, etc. Why not just use a GC and be done with it? By the time you factor in the overhead of all the management infrastructure that an average C++ program puts on top of malloc(), a GC doesn’t seem so slow after all!
Hardly. Value is just a type-safe wrapper on top of void*.
You can say the same thing about any Value-like object in any other language. Anyway, implementation is irrelevant.
The STL just accepts the nature of C++, and lives with the copying overhead, but U++ converts that into mental overhead for the programmer.
Mental overhead is quite low, but I can agree that the learning curve is steeper (though much less steep than the one for the STL/Boost combo).
Anyway, the point you miss is that the real reason for avoiding GC is not performance (I know GC can be pretty fast; I guess it is generally faster than shared_ptr), but the fact that GC is unable to deal with any resources other than memory. If you arrange things well (like in U++), this makes for a huge difference.
As for callbacks, a 1:1 comparison is stupid there. C++ has a different set of features and different ways of dealing with things.
Hardly. Value is just a type-safe wrapper on top of void*.
No, you can’t. Void* is just a way to pass around objects of unknown type. Dynamic typing is a whole lot more. In particular, dynamic typing requires some form of generic dispatch. Value is just a way to box up an object and pass it around. You still need to unbox it to use it.
Mental overhead is quite low, but I can agree that learning curve is more steep (but much less steep than the one for STL/Boost combo).
For a C++ programmer, maybe. For anybody who is used to actual productive languages, no it’s not. Passing around objects in Lisp has almost no semantics. Everything is a reference. In C++, you have three sets of semantics (reference, value, pointer), and U++ adds another set of its own.
As for callbacks, a 1:1 comparison is stupid there. C++ has a different set of features and different ways of dealing with things.
The claim was that callbacks subsume the uses of closures. Closures do way more than callbacks, and there is no good replacement for them in C++.
I don’t know what is cross-platform about it when Mac OS X is still left out, pah
“Q: How do I deal with memory leaks?
A: By writing code that doesn’t have any.”
(c) B. Stroustrup
could be used… e.g. for speeding up your development.
Say you have a project of 1000+ source code files and 20-30 or more programmers on the project. You work mostly on 10-20 files (say your game code, logic, or whatever). Instead of waiting every time to compile, link, and transfer to the console dev-kit (GameCube, PS2, PS3, whatever), you can simply continue working by just uploading the C++ byte code. Wait for the function you are changing to go out of the stack, and call the newer one. REPEAT. Or something along those lines.
Right now, in our projects (big gamedev studio) we sometimes wait 2-3 minutes for link times. Then you need to send the executable to the kit, wait for it to be loaded, wait for the symbols to be loaded. Then the game has to start from the beginning, and then you continue doing your job… INSTEAD of just continuously uploading your recent changes. And zero-link or incremental building does not work everywhere, or for everything. So it might be preferred.
I’m all for such an approach; I’ve been trying to get ROOT/CINT working, but it was rather complex (Ultimate++ looks easier). There is also AiC (which is just C).
I’m sorry but I’m afraid you’re comparing apples to oranges here…
The ROOT framework is designed by physicists for physicists (okay, biologists and mathematicians are welcome though! :-). It can probably be used for other things, but it’s certainly not a general-purpose framework…
Anyway, just my 2 eurocents…
Screenshot of the Linux version?
I have looked at the Ultimate++ code samples and I am convinced that it’s a great platform. I will certainly be considering it very carefully for my own coding in the future. A problem, though, is that their English is not so easy to read. Now I know full well how difficult foreign (human!!) languages are to learn, and all of them speak way better English than I speak any other language. However, they will need far better documentation in order for this platform to be successful.
Yeah, it’s a real shame. Ultimate++ would be a lot more popular if they had better and more complete documentation.
Great!
How do I add a new class to my project?
Fine, GC might be fast enough, but what about memory footprint?
Let’s be a little careful with terms. You probably don’t mean “memory footprint”, but rather “overall memory usage”. Compacting GCs tend to be better than manual memory management when it comes to footprint, because it tends to squeeze the active data into contiguous chunks. Most high-performance GCs, however, trade overall memory usage for speed by keeping lots of garbage data around. So the footprint is often quite small, but the overall usage can be very large.
That out of the way, let’s consider overall memory usage in context. We’re talking about a heavily-templated C++ library here. These things bloat binary size by a factor of two or three. You’re not going to be using something like this in an embedded situation. Given that, does memory usage even matter? As a user, I’d much rather buy RAM (which only costs like $80 a gig these days!) and get new features in my software sooner rather than save a negligible amount on RAM. Unlike CPU speeds, memory density hasn’t hit any walls yet. It continues to get cheaper at a frightening rate. In most fields, there is no point going to great lengths to conserve it.