Category Archives: OOP

What’s all the hype about the Heap and the Stack?

The terms Stack and Heap get thrown around a lot in the programming world. But what do they mean? What is the difference between them, and why should we care?

Introduction

To start, I would like to clear up a possible misconception. “Heap” here refers to a memory allocation technique, not to the data structure of the same name. Furthermore, when we speak of the Heap as opposed to the Stack, we do not mean two types of memory; we mean two ways of allocating memory.

When your code is compiled (or run, if interpreted), the compiler needs to decide how to allocate your variables in memory. To understand this process, let’s first examine the pros and cons of each type of allocation.

Continue reading What’s all the hype about the Heap and the Stack?

Effortless Class Diagrams for all your golang needs

A picture is worth a thousand words.

If you agree that nothing paints a better picture of your software project than a well-maintained UML class diagram, then this post is for you.

Motivation

I have been fascinated with Golang because of the versatility of the language. I wanted to take advantage of the Golang parser and a great tool called PlantUML (http://plantuml.com/) to create a program that translates my Golang code into a neat class diagram.

Continue reading Effortless Class Diagrams for all your golang needs

An easy way to make MVC great again!

If you are a professional iOS or Android developer and you openly profess your love for MVC as the base architecture for your apps, you will be treated like a leper and ostracized.

Nowadays MVC is frowned upon and has fallen out of favor with developers.

VIPER, React, MVVM: those are good; MVC is bad and crappy…

Truth or myth? 

Well, MVC is old indeed, and it certainly has its flaws, the most common of all being the feared “massive view controller”.

But although the aforementioned new architectures bring to the table some solutions to MVC’s intrinsic issues, they have some flaws of their own. Yeah, “nobody is perfect”.

MVC is great; programmers are just too careless.

Continue reading An easy way to make MVC great again!

iOS Singletons – Objective-C

A singleton is a special kind of class where only one instance of the class exists for the current process. (In the case of an iOS app, the one instance is shared across the entire app.) Some examples in UIKit are [UIApplication sharedApplication] (which returns the sole instance of the application itself), and [NSFileManager defaultManager] (which returns the file manager instance). Singletons can be an easy way to share data and common methods across your entire app.

Rather than create instances of the singleton class using alloc/init, you’ll call a class method that will return the singleton object. You can name the class method anything, but common practice is to call it sharedName or defaultName.

Header
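A minimal header might expose nothing beyond the class accessor. In this sketch, MyManager and sharedManager are placeholder names following the naming convention described above:

```objc
// MyManager.h
#import <Foundation/Foundation.h>

@interface MyManager : NSObject

// Returns the single shared instance; never alloc/init this class directly.
+ (instancetype)sharedManager;

@end
```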

Implementation
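A common thread-safe implementation relies on GCD’s dispatch_once, which guarantees the block runs exactly once for the lifetime of the process (MyManager is again a placeholder name):

```objc
// MyManager.m
#import "MyManager.h"

@implementation MyManager

+ (instancetype)sharedManager {
    static MyManager *sharedInstance = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        sharedInstance = [[self alloc] init];
    });
    return sharedInstance;
}

@end
```

Anywhere in the app, [MyManager sharedManager] then returns the same object, so data set on it in one place is visible everywhere else.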

Swift, Apple’s new programming language

If anyone outside Apple saw Swift coming, they certainly weren’t making any public predictions. In the middle of a keynote filled with the sorts of announcements you’d expect (even if the details were a surprise), Apple this week announced that it has created a modern replacement for Objective-C, the programming language the company has used since shortly after Steve Jobs founded NeXT.

Swift wasn’t a “sometime before the year’s out”-style announcement, either. The same day, a 550-page language guide appeared in the iBooks store. Developers were also given access to Xcode 6 betas, which allow application development using the new language. Whatever changes were needed to get the entire Cocoa toolkit to play nice with Swift are apparently already done.

While we haven’t yet produced any Swift code, we have read the entire language guide and looked at the code samples Apple provided. What follows is our first take on the language itself, along with some ideas about what Apple hopes to accomplish.

Why were we using Objective-C?

When NeXT began, object-oriented programming hadn’t been widely adopted, and few languages available even implemented it. At the time, then, Objective-C probably seemed like a good choice, one that could incorporate legacy C code and programming habits while adding a layer of object orientation on top.

But as it turned out, NeXT was the only major organization to adopt the language. This had some positive aspects, as the company was able to build its entire development environment around the strengths of Objective-C. In turn, anyone who bought in to developing in the language ended up using NeXT’s approach. For instance, many “language features” of Objective-C aren’t actually language features at all; they are implemented by NeXT’s base class, NSObject. And some of the design patterns in Cocoa, like the existence of delegates, require the language introspection features of Objective-C, which are used to safely determine whether an object will respond to a specific message.

The downside of narrow Objective-C adoption was that it forced the language into a niche. When Apple inherited Objective-C, it immediately set about giving developers an alternative in the form of the Carbon libraries, since these enabled a more traditional approach to Mac development.

Things changed with the runaway popularity of the iPhone SDK, which only allowed development in Objective-C. Suddenly, a lot of developers used Objective-C, and many of them already had extensive experience in other programming languages. This was great for Apple, but it caused a bit of strain. Not every developer was entirely happy with Objective-C as a language, and Apple then compounded this problem by announcing that the future of Mac development was Cocoa, the Objective-C frameworks.

What’s wrong with Objective-C?

Objective-C has served Apple incredibly well. By controlling the runtime and writing its own compiler, the company has been able to stave off some of the language limitations it inherited from NeXT and add new features, like properties, a garbage collector, and the garbage collector’s replacement, Automatic Reference Counting.

But some things really couldn’t be changed. Because it was basically C with a few extensions, Objective-C was limited to using C’s method of keeping track of complex objects: pointers, which are essentially the memory address occupied by the first byte of an object. Everything, from an instance of NSString to the most complex table view, was passed around and messaged using its pointer.

For the most part, this didn’t pose problems. It was generally possible to write complex applications without ever being reminded that everything you were doing involved pointers. But it was also possible to screw up and try to access the wrong address in memory, causing a program to crash or opening a security hole. The same holds true for a variety of other features of C; developers either had to do careful bounds and length checking or their code could wander off into random places in memory.

Beyond such pedestrian problems, Objective-C simply began showing its age. Over time, other languages adopted some great features that were difficult to graft back onto a language like C. One example is what’s termed a “generic.” In C, if you want to do the same math with integers and floating point values, you have to write a separate function for each—and other functions for unsigned long integers, double-precision floating points, etc. With generics, you can write a single function that handles everything the compiler recognizes as a number.
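In Swift, the single-function version looks like the sketch below. Note that the Numeric protocol used here is the modern standard-library spelling, added to the language after its initial release:

```swift
// One generic function covers Int, Double, Float, and any other Numeric type.
func square<T: Numeric>(_ x: T) -> T {
    return x * x
}

square(3)     // Int result: 9
square(2.5)   // Double result: 6.25
```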

Apple clearly could add some significant features to the Objective-C syntax—closures are one example—but it’s not clear that it could have added everything it wanted. And the very nature of C meant that the language would always be inherently unsafe, with stability and security open to compromise by a single sloppy coder. Something had to change.

But why not take the easy route and adopt another existing language? Because of the close relationship between Objective-C and the Cocoa frameworks, Objective-C enabled the sorts of design patterns that made the frameworks effective. Most of the existing, mainstream alternatives didn’t provide such a neat fit for the existing Cocoa frameworks. Hence, Swift.

Is it any good?

Swift isn’t a radical departure in many ways. Apple likes certain design patterns, and it constructed Objective-C and Cocoa to encourage them. Swift does the same thing, going further toward formalizing some of the patterns that have been adopted in a somewhat haphazard way (like properties). Most of the features Swift adds already exist in other programming languages, and these will be familiar to many developers. The features that have been added are generally good ones, while the things that have been taken away (like pointer math) were generally best avoided anyway.

In that sense, Swift is a nice, largely incremental change from Objective-C. All the significant changes are in the basic syntax. Use semicolons and parentheses—or don’t, it doesn’t matter. Include the method signature in the function call—but only if you feel like it. In these and many other cases, Swift lets you choose a syntax and style you’re comfortable with, in many cases allowing you to minimize typing if you choose to.

Most of the new features have been used in other languages, the syntax changes get rid of a lot of Objective-C’s distinctiveness, and you’re often able to write equivalent code using very different syntax. All of this enables Swift to look familiar to a lot of people who are familiar with other languages. That sort of rapport has become more important as Apple attracts developers who’d never even touched C before. These people will still have to learn to work with the design patterns of Apple’s frameworks, but at least they won’t be facing a language that’s intimidatingly foreign at the same time.

In general, these things seem like positives. If Apple chose a single style, then chances were good that a number of its choices wouldn’t be ones we’d favor. But with the flexibility, we’ll still be able to work close to the way we’d want.

Close, but not exactly. There are a couple of specific syntax features I’m personally not a fan of and a number of cases where a single character can make a radical difference to the meaning of a line of code. Combined, the syntax changes could make managing large projects and multiple developers harder than it has been with Objective-C.

What’s Apple up to?

For starters, it’s doing the obvious. Swift makes a lot of common errors harder and a number of bad practices impossible. If you choose, you can write code in Swift pretty tersely, which should make things easier for developers. It adds some nice new features that should make said developers more productive. All of those are good things.

More generally, though, Apple is probably mildly annoyed with people like me. I spent time getting good at using autorelease pools, my apps didn’t leak memory, and I didn’t see the point in learning the vagaries of the syntax required to make sure Automatic Reference Counting didn’t end up with circular references that couldn’t be reclaimed. I wasn’t a huge fan of the dot notation for accessing properties, so I only used it when it couldn’t be avoided. In short, I was a dinosaur in waiting.

People like me are why the runtime and compiler teams can’t have nice things. If everybody’s using the same features, it’s easier to get rid of legacy support and optimize the hell out of everything that’s left. A smaller memory footprint and better performance mean lower component costs and better battery life, which are very good things for the company.

Apple promised better performance with Swift, and you can see some places where it might extract a bit. Constants are a big part of Swift, which makes sense. If you make a stock-tracking app, the price may change every second, but the stock’s name and symbol change so rarely that it’s just as easy to make a whole new object when this happens. Declare the name and symbol constants, and you can skip all the code required to change them in a thread-safe manner. Presumably, the compiler can manage some optimizations around the use of constants as well.
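Sketching that stock example in Swift (the Stock type here is hypothetical), let marks the fields that never change and var the one that does:

```swift
struct Stock {
    let symbol: String   // constant: fixed at creation
    let name: String     // constant: fixed at creation
    var price: Double    // variable: may change every second
}

var quote = Stock(symbol: "AAPL", name: "Apple Inc.", price: 210.0)
quote.price = 211.5      // allowed: price is a var
// quote.symbol = "APPL" // compile-time error: symbol is a let
```

Because the compiler knows symbol and name can never change, it can skip any thread-safety machinery around reading them.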

Unlike the dinosaurs, we can see Chicxulub coming. Two or three years from now, when Apple announces that the future is Swift and it’s ready to drop Objective-C, we won’t be at all surprised. And I won’t be at all upset, because I’ll have spent the intervening few years making sure I know how to use the new language.

Pointers I

A pointer is a variable that holds the memory address of a piece of data, or of another variable that contains the data. In other words, the pointer points to the physical location where the data or the variable lives.

Its declaration syntax is:
tipo *NombrePuntero;

Here tipo is the data type this pointer will reference; that is, if you need to store the memory address of an int value, you need a pointer of type int.

int a=0; //declaration of an integer variable
int *puntero; //declaration of a pointer-to-int variable
puntero = &a; //assignment of the memory address of a

The * operator lets us access the value stored at the address the pointer holds; in this case, it gives us the value contained in a. In this way “a” and “*puntero” display the same data, but that does not mean they are the same thing: one is an integer, the other a pointer.

Operators

Address-of operator (&): returns the memory address of the variable passed to it. On the surface it behaves like a function whose return value is a memory address.

Indirection operator (*): besides letting us declare a pointer type, it also lets us see the VALUE stored at the address the pointer points to.

Don’t confuse this with type * name, which is a pointer declaration, or with 2*2, which is an arithmetic operation, in this case multiplication.

In C++ the * symbol holds down several jobs at once.  :mrgreen:

Javadocs

Javadoc is an Oracle utility that generates API documentation in HTML format from Java source code. Javadoc is the industry standard for documenting Java classes, and most IDEs generate it automatically.


Javadoc also provides an API for creating doclets and taglets, which lets you analyze the structure of a Java application. This is how JDiff can generate reports of what has changed between two versions of an API.


To generate API docs with Javadoc, you use HTML tags or certain reserved words preceded by the “@” character. These tags are written at the beginning of each class, member, or method, depending on which element you want to describe, inside a comment that starts with “/**” and ends with “*/”.


NetBeans helps generate Javadocs automatically.


The Javadoc tags are:

Tag          Description
@author      Name of the developer.
@deprecated  Marks the method or class as obsolete; its use is discouraged because it will probably disappear in later versions.
@param       Describes a method parameter; required for every parameter of the method.
@return      Describes what the method returns; cannot be used on constructors or “void” methods.
@see         Links to a related method or class.
@throws      An exception thrown by the method.
@version     Version of the method or class.



An example of a Javadoc comment for a method:

/**
  * Sets the title on the description class.
  * Since the title is mandatory, an exception is thrown
  * if it is null or empty.
  *
  * @param titulo The new title of the description.
  * @throws IllegalArgumentException If titulo is null, empty, or contains only spaces.
  */
 public void setTitulo(String titulo) throws IllegalArgumentException
 {
   if (titulo == null || titulo.trim().equals(""))
   {
       throw new IllegalArgumentException("The title cannot be null or empty");
   }
   else
   {
       this.titulo = titulo;
   }
 }