Overriding Getters Setters vs. Public Fields

Why do we keep instance variables private? Because we don’t want other classes to depend on them. Keeping them private also gives us the flexibility to change a variable’s type or implementation later. Why, then, do programmers automatically add getters and setters to their objects, exposing their private variables as if they were public?

Accessor methods

Accessors (also known as Getters and Setters) are methods that let you read and write the value of an instance variable of an object.

public class AccessorExample {
    private String attribute;

    public String getAttribute() {
        return attribute;
    }

    public void setAttribute(String attribute) {
        this.attribute = attribute;
    }
}

Why Accessors?

There are actually many good reasons to consider using accessors rather than directly exposing the fields of a class.

Getters and setters make an API more stable. Consider a public field in a class that is accessed by other classes. If we later want to add extra logic when getting or setting the variable, we break every existing client of the API: any change to the public field requires changes to each class that refers to it. With accessor methods, by contrast, we can easily add logic such as caching or lazy initialization without touching client code.

An accessor method also allows us to fire a property-change event if the new value differs from the previous value.

Another advantage of using setters is that we can use the method to preserve an invariant or perform special processing when setting a value.

All this is seamless to the class that gets the value through the accessor method.
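For instance, a setter can reject values that would violate a class invariant. A minimal sketch, with a hypothetical Temperature class whose value may never drop below absolute zero:

```java
// Hypothetical class: the setter enforces an invariant
// (no value below absolute zero) and the getter stays a plain read.
public class Temperature {
    private double celsius;

    public double getCelsius() {
        return celsius;
    }

    public void setCelsius(double celsius) {
        if (celsius < -273.15) {
            throw new IllegalArgumentException("Below absolute zero: " + celsius);
        }
        this.celsius = celsius;
    }
}
```

A caller that sets an impossible value gets an exception immediately, instead of silently corrupting the object's state.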

Should I have Accessor Methods for all my fields?

Fields can be declared public in package-private or private nested classes. Exposing fields in these classes produces less visual clutter than the accessor-method approach, both in the class definition and in the client code that uses it.

If a class is package-private or is a private nested class, there is nothing inherently wrong with exposing its data fields—assuming they do an adequate job of describing the abstraction provided by the class.

Such code is restricted to the package where the class is declared, so although the client code is tied to the class's internal representation, we can change that representation without modifying any code outside the package. In the case of a private nested class, the scope of a change is further restricted to the enclosing class.

Another example of a design that uses public fields is JavaSpaces entry objects. Ken Arnold described the process they went through in deciding to make those fields public instead of private with getter and setter methods:

Now this sometimes makes people uncomfortable because they've been told not to have public fields; that public fields are bad. And often, people interpret those things religiously. But we're not a very religious bunch. Rules have reasons. And the reason for the private data rule doesn't apply in this particular case. It is a rare exception to the rule. I also tell people not to put public fields in their objects, but exceptions exist. This is an exception to the rule because it is simpler and safer to just say it is a field. We sat back and asked: Why is the rule thus? Does it apply? In this case, it doesn't.

Private fields + Public accessors == encapsulation

Consider the example below

public class A {
    public int a;
}

Generally, this is considered bad practice because it violates encapsulation. The alternative approach is

public class A {
    private int a;

    public void setA(int a) {
        this.a = a;
    }

    public int getA() {
        return this.a;
    }
}

It is argued that this encapsulates the attribute. But is this really encapsulation?

In fact, getters and setters like these have little to do with encapsulation. The data is no more hidden or encapsulated than it was as a public field. Other objects still have intimate knowledge of the internals of the class, and changes to the class can still ripple out and force changes in dependent classes. Getters and setters used this way generally break encapsulation. A truly well-encapsulated class has no setters and preferably no getters either. Rather than asking a class for some data and then computing something with it, we should make the class responsible for computing with its own data and returning the result.

Consider an example below,

public class Screens {
    private Map screens = new HashMap();

    public Map getScreens() {
        return screens;
    }

    public void setScreens(Map screens) {
        this.screens = screens;
    }
    // remaining code here
}

If a client needs a particular screen, it ends up with code like this (where screens is the Map obtained from getScreens()):

Screen s = (Screen) screens.get(screenId);

There are a few things worth noticing here.

The client has to get an Object from the Map and cast it to the right type. Worse, any client of the Map has the power to clear it, which is rarely what we want.

An alternative implementation of the same logic is:

public class Screens {
    private Map screens = new HashMap();

    public Screen getById(String id) {
        return (Screen) screens.get(id);
    }
    // remaining code here
}

Here the Map instance is hidden, and the Map interface no longer appears at the class boundary.

Getters and Setters are highly Overused

Creating private fields and then using the IDE to automatically generate getters and setters for all of them is almost as bad as using public fields.

One reason for the overuse is that creating these accessors takes just a few clicks in an IDE. The resulting, largely meaningless getter and setter code is at times longer than the real logic in a class, and you will read these methods many times even if you don't want to.

All fields should be kept private, with setters added only when they make sense; leaving setters out also makes it possible to keep objects immutable. An unnecessary getter reveals internal structure, which is an opportunity for increased coupling. To avoid this, before adding an accessor we should ask whether we can encapsulate the behaviour instead.

Let’s take another example,

public class Money {
    private double amount;

    public double getAmount() {
        return amount;
    }

    public void setAmount(double amount) {
        this.amount = amount;
    }
}

// client
Money pocketMoney = new Money();
pocketMoney.setAmount(15d);
double amount = pocketMoney.getAmount();  // we know it's a double
pocketMoney.setAmount(amount + 10d);

With this design, if we later decide that double is the wrong type to use and that the amount should be a BigDecimal instead, every existing client of the class breaks.

Let’s restructure the above example,

public class Money {
    private BigDecimal amount;

    public Money(String amount) {
        this.amount = new BigDecimal(amount);
    }

    public void add(Money toAdd) {
        amount = amount.add(toAdd.amount);
    }
}

// client
Money balance1 = new Money("10.0");
Money balance2 = new Money("6.0");
balance1.add(balance2);

Now, instead of handing out its value, the class is responsible for increasing its own value. With this approach, changing to another data type in the future requires no change in client code. Not only is the data encapsulated, but so is its representation, and even the fact that it exists at all.

Conclusions

Using accessors to restrict direct access to field variables is preferable to using public fields, but making getters and setters for each and every field is overkill. It also depends on the situation; sometimes you just want a dumb data object. Accessors should be added only where they are really required. A class should expose larger behaviour that happens to use its state, rather than acting as a repository of state to be manipulated by other classes.

More Reading

https://c2.com/cgi/wiki?TellDontAsk

https://c2.com/cgi/wiki?AccessorsAreEvil

Effective Java

One of the features in Java 7 is Automatic Resource Management (ARM). The idea was proposed by Josh Bloch. IO resources in Java, such as FileInputStream, OutputStream, Reader and Writer, need to be closed manually. The idea behind this proposal is that disposing of these resources should not be the developer's responsibility, much as memory management is handled by automatic garbage collection.

We usually write this kind of code for IO resources as below.

FileInputStream input = null;
try {
    input = new FileInputStream(configResource);
    wfConfiguration = wfConfiguration.parseInputStream(input);
} catch (IOException ioe) {
    throw new RuntimeException(ioe);
} finally {
    if (input != null) {
        try {
            input.close();
        } catch (IOException ioe2) {
            throw new RuntimeException(ioe2);
        }
    }
}

The same code can be written in Java 7 as

try (FileInputStream input = new FileInputStream(configResource)) {
    wfConfiguration = wfConfiguration.parseInputStream(input);
} catch (IOException ioe) {
    throw new RuntimeException(ioe);
}

Note that the resource must be declared inside the try-with-resources statement itself; it is then closed automatically when the block exits.
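try-with-resources is not limited to the JDK's IO classes; it works with any type implementing AutoCloseable. A minimal sketch, using a made-up TrackedResource class to show that close() runs automatically:

```java
// TrackedResource is a hypothetical class used only to demonstrate
// that close() is invoked automatically when the try block exits.
class TrackedResource implements AutoCloseable {
    static boolean closed = false;

    void use() { /* work with the resource */ }

    @Override
    public void close() {
        closed = true;   // runs automatically at the end of the try block
    }
}
```

Using it as `try (TrackedResource r = new TrackedResource()) { r.use(); }` leaves `TrackedResource.closed` set to true, with no finally block needed.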

Hope you like the new syntax.

Project Coin: Updated ARM Spec

Does Java pass by reference or pass by value?

In Java, everything is passed by value. This is sometimes confusing, but the point to understand is that when we pass a parameter to a method in Java, a copy of the parameter is made.

If the argument is a primitive type, a copy of the primitive's value is passed. If the argument is an object reference, a copy of the value of the reference is passed; i.e., the object reference itself is passed by value.

Consider the example below,

public static void main(String[] args) {
   A a = new A();
   A b = new A();
   a.attribute = 5;
   b.attribute = 7;
   System.out.println(a.attribute);
   System.out.println(b.attribute);
   changeAttribute(a,b);
   System.out.println(a.attribute);
   System.out.println(b.attribute);
}

public static void changeAttribute(A a, A b) {
   a = b;
   System.out.println(a.attribute);
}   

Since Java is pass by value, the caller still does not see the change even after calling the method: the assignment a = b inside changeAttribute only overwrites the method's local copy of the reference.

In short, Java is pass by value for all data types. Non-primitive variables are references to objects, not the objects themselves. Because of this, objects are never passed to a method; only a copy of the reference to the object is passed.
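Both behaviours can be shown in one self-contained sketch (Box and the helper methods are illustrative, not from the original example):

```java
class Box {
    int value;
}

class PassByValueDemo {
    // Reassigning the parameter changes only the method's local copy
    // of the reference; the caller's variable is untouched.
    static void reassign(Box b) {
        b = new Box();
        b.value = 99;
    }

    // Mutating through the copied reference does reach the caller's object.
    static void mutate(Box b) {
        b.value = 42;
    }
}
```

After `reassign(box)` the caller still sees the old value; after `mutate(box)` the caller sees 42, because both method calls received a copy of the same reference to the same object.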


Java Bugs with Static Analysis Tool findbugs Help Improve Software Code Quality

This post is about static analysis tools that find real bugs in Java programs and help improve software code quality. Static analysis tools can find real defects and issues in code, and we can effectively incorporate static analysis into our software development process.

FindBugs - Static analysis Tool

FindBugs is an open source static analysis tool that analyzes Java class files, looking for programming defects. The analysis engine reports nearly 300 different bug patterns. Each bug pattern is grouped into a category, for example:

  • correctness
  • bad practice
  • performance
  • internationalization
and each report of a bug pattern is assigned a priority:

  • high
  • medium
  • low

Let’s start with some of the selected bug categories with some examples.

Correctness

Comparing incompatible types for equality

Consider the following code,

if ((!value.equals(null)) && (!value.equals(""))) {
    Map spaces = (Map) vm.get(SpaceConstants.AVAILABLESPACEMAP);
}

One would expect the condition to be true when value is not null and not empty. However, according to the contract of the equals() method, value.equals(null) always returns false.

Consider another similar example,

if ((bean.getNoteRate() != null) && 
    !bean.getNoteRate().equals("") && 
    (bean.getNoteRate() > 0)) {
          item.setNoteRate(bean.getNoteRate());
}

We might expect the condition to be true when noteRate is not null, not empty, and greater than 0. However, the condition can never be true.

The reason is that bean.getNoteRate().equals("") always returns false: getNoteRate() returns a Double, which can never be equal to a String, regardless of the value.

According to the contract of equals(), objects of different classes should always compare as unequal; therefore, according to the contract defined by java.lang.Object.equals(Object), the result of this comparison will always be false at runtime.

Null pointer dereference

Consider the following code,

if ((list == null) && (list.size() == 0)) {
     return null;
}

This will throw a NullPointerException whenever list is null: the first condition is then true, so the second operand, list.size(), is evaluated on a null reference. The intended operator is almost certainly ||.
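The intended guard can be written so that the size check is reached only when the list is non-null. A sketch (ListGuard is a made-up holder for the check):

```java
import java.util.List;

class ListGuard {
    // || short-circuits: isEmpty() is evaluated only when list != null
    static boolean isNullOrEmpty(List<?> list) {
        return list == null || list.isEmpty();
    }
}
```

The short-circuit semantics of || are exactly what makes this safe: a null list never reaches the isEmpty() call.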

Suspicious reference comparison

Consider the following code,

if (bean.getPaymentAmount() != null && 
    bean.getPaymentAmount() != currBean.getPrincipalPaid()) { 
        // code to execute
}

This code compares two reference values (boxed Double payment amounts) using the != operator, whereas the correct way to compare instances of this type is generally with the equals() method. It is possible to create distinct instances that are numerically equal but do not compare as == because they are different objects.
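A short illustration of the difference (the values are arbitrary; new Double is used precisely because it guarantees distinct instances):

```java
Double paymentAmount = new Double(1234.5);    // distinct instance
Double principalPaid = new Double(1234.5);    // equal value, different object

boolean sameObject = (paymentAmount == principalPaid);    // false: reference comparison
boolean sameValue  = paymentAmount.equals(principalPaid); // true: value comparison
```

Since `==` on boxed types compares references, the first check is false even though both objects hold 1234.5.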

Doomed test for equality to NaN

Consider the following code,

if ((newValue == Double.NaN) || (newValue < 0d)) {
    // the code to execute
}

This code checks whether a floating point value is equal to the special Not-a-Number value. However, because of the special semantics of NaN, no value is equal to NaN, not even NaN itself; thus x == Double.NaN always evaluates to false. To check whether the value in x is NaN, use Double.isNaN(x) (or Float.isNaN(x) if x is a float).

Also see How can you compare NaN values?

Method whose return value should not be ignored

String is an immutable object, so ignoring the return value of a method like toUpperCase() is considered a bug.

String name = "Muhammad";
name.toUpperCase();            // return value is ignored
if (name.equals("MUHAMMAD"))   // never true
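Since toUpperCase() returns a new String and leaves the original untouched, the fix is to assign the result back:

```java
String name = "Muhammad";
name = name.toUpperCase();   // keep the returned String
// name is now "MUHAMMAD"
```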

Performance

Method invokes inefficient Boolean constructor; use Boolean.valueOf(...) instead

Consider the following code,

if ((record.getAmount() != null) && 
    !record.getAmount().equals(new Boolean(bean.isCapitalizing()))) { 
           // code to execute
}

Creating new instances of java.lang.Boolean wastes memory, since Boolean objects are immutable and there are only two useful values of this type. Use the Boolean.valueOf() method (or Java 1.5 autoboxing) to create Boolean objects instead.

Inefficient use of keySet iterator instead of entrySet iterator

Consider the following code,

Iterator iter = balances.keySet().iterator();     
while (iter.hasNext()) { 
    // code to execute
}

This code accesses the value of a Map entry using a key retrieved from a keySet iterator. It is more efficient to iterate over the entrySet of the map, avoiding the Map.get(key) lookup for each key.
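A sketch of the entrySet form (the map contents here are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;

Map<String, Integer> balances = new HashMap<String, Integer>();
balances.put("acct-1", 100);
balances.put("acct-2", 250);

int total = 0;
// Each entry carries key and value together; no Map.get(key) lookup needed.
for (Map.Entry<String, Integer> entry : balances.entrySet()) {
    total += entry.getValue();
}
// total is 350
```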

Method invokes inefficient Number constructor; use static valueOf instead

Consider the following code,

Integer number1 = new Integer(123);
Integer number2 = Integer.valueOf(123);
System.out.println("number1 =  " + number1);
System.out.println("number2 =  " + number2);

Using new Integer(int) is guaranteed to always result in a new object, whereas Integer.valueOf(int) allows values to be cached by the class library or the JVM. Using cached values avoids object allocation and makes the code faster.

Also see: Integer Auto Boxing

Method concatenates strings using + in a loop

for (int x = 0; x < exceptions.size(); x++) {
    errorMessage += getStackTrace(exceptions.get(x)) + "\n";
}

In each iteration, the String is converted to a StringBuffer/StringBuilder, appended to, and converted back to a String. This can lead to a cost quadratic in the number of iterations, as the growing string is recopied in each iteration.

Better performance can be obtained by using a StringBuilder explicitly.
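A sketch of the loop rewritten with an explicit StringBuilder (the message list is made up, and the getStackTrace call is replaced by plain strings for brevity):

```java
import java.util.Arrays;
import java.util.List;

List<String> traces = Arrays.asList("trace-1", "trace-2", "trace-3");

StringBuilder errorMessage = new StringBuilder();
for (String trace : traces) {
    // appends into one growing buffer; no intermediate Strings per iteration
    errorMessage.append(trace).append('\n');
}
String result = errorMessage.toString();
```

The buffer grows amortized-linearly, so the quadratic recopying of the += version disappears.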

Dodgy

Code that is confusing, anomalous, or written in a way that lends itself to errors. Examples include dead local stores, switch fall-through, unconfirmed casts, and redundant null checks of values known to be null.

instanceof will always return true

The instanceof test will always return true (unless the value being tested is null). In the code below, every item returned by NodeList.item() is already a Node, so the test is redundant:

NodeList nodeList = root.getElementsByTagName("node");
int nodeListLength = nodeList.getLength();
for (int i = 0; i < nodeListLength; i++) {
   Node node = nodeList.item(i);
   if (node instanceof Node &&  
       node.getParentNode() == root) {
           //do code
   }
}

Test for floating point equality

Consider the following code,

private double value = 0d;
if (value > diff) {
    // code to execute
} else if (value == diff) {
    // code to execute
}

The above code compares two floating point values for equality. Because floating point calculations may involve rounding, computed float and double values may not be exact. For values that must be precise, such as monetary values, BigDecimal is more appropriate.

See Floating-Point Operations

Also see Effective Java, 2nd Ed., Item 48: Avoid float and double if exact answers are required

Integral division result cast to double or float

Consider the code,

int x = 2;
int y = 5;
// Wrong: yields result 0.0
double value1 =  x / y;

This code casts the result of an integral division to double or float. Division on integers truncates the result to the integer value closest to zero. The fact that the result is cast to double suggests that this precision should have been retained; instead, cast one or both operands to double before performing the division:

// Right: yields result 0.4
double value2 =  x / (double) y;


Memory management is done automatically in Java, so the programmer doesn't need to worry about freeing objects that are no longer referenced. One downside of this approach is that the programmer cannot know when a particular object will be collected and has no control over memory management. However, the java.lang.ref package defines classes that provide a limited degree of interaction with the garbage collector. The concrete classes SoftReference, WeakReference and PhantomReference are subclasses of Reference that interact with the garbage collector in different ways. In this article we will discuss the functionality and behavior of the PhantomReference class and see how it can be used.

Problem with Finalization

To perform some postmortem cleanup on objects that the garbage collector considers unreachable, one can use finalization. This feature can be used to reclaim native resources associated with an object. However, finalizers come with many problems.

Firstly, we cannot foresee when finalize() will be called. Since garbage collection is unpredictable, the timing of finalize() cannot be predicted, and there is no guarantee the object will be garbage collected at all: it might remain reachable for the entire lifetime of the JVM, or no collection may run between the time the object becomes eligible and the time the JVM stops.

Secondly, finalization can slow down an application. Managing objects with a finalize() method takes more resources from the JVM than managing normal objects.

As per doc,

You should also use finalization only when it is absolutely necessary. Finalization is a nondeterministic -- and sometimes unpredictable -- process. The less you rely on it, the smaller the impact it will have on the JVM and your application


In Effective Java, 2nd ed., Joshua Bloch says,

there is a severe performance penalty for using finalizers... So what should you do instead of writing a finalizer for a class whose objects encapsulate resources that require termination, such as files or threads? Just provide an explicit termination method, and require clients of the class to invoke this method on each instance when it is no longer needed.


In short, finalize() isn't used often, and there is not much reason to use it. If a class has methods like close() or cleanup() that should be called once the user is done with the object, then calling them from finalize() can serve as a safety net, but it is not necessary.

Phantom Reference
phantom reachable, phantomly reachable


An object is phantom reachable if it is neither strongly nor softly nor weakly reachable and has been finalized and there is a path from the roots to it that contains at least one phantom reference.

The PhantomReference constructor accepts two arguments:

referent - the object the new phantom reference will refer to
q - the queue with which the reference is to be registered

The q argument is an instance of the ReferenceQueue class. If the garbage collector determines that the referent of a phantom reference is phantom reachable, the PhantomReference is added to this ReferenceQueue. You can then retrieve it using the remove() or poll() methods of the ReferenceQueue class.

Consider the following example,

ReferenceQueue q = new ReferenceQueue();
PhantomReference pr = new PhantomReference(object, q);

// Later, at another point
Reference r = q.remove();

// Now, clean up anything you want
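Putting the pieces together in a runnable sketch (the polling loop is needed because garbage collection can only be requested, never forced):

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;

ReferenceQueue<Object> queue = new ReferenceQueue<Object>();
Object resource = new Object();
PhantomReference<Object> ref = new PhantomReference<Object>(resource, queue);

resource = null;                 // drop the only strong reference
Reference<?> enqueued = null;
for (int i = 0; i < 50 && enqueued == null; i++) {
    System.gc();                 // a request, not a guarantee, hence the loop
    Thread.sleep(10);
    enqueued = queue.poll();
}
// once enqueued == ref, the referent has been collected and
// post-mortem cleanup can run here
```

On typical JVMs the reference is enqueued within a few iterations, but the language makes no timing guarantee, which is exactly why the queue-based signal is needed.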

PhantomReference, when to use?
A phantom reference can be used in situations where using finalize() is not a sensible thing to do. This reference type differs from the other types defined in the java.lang.ref package because it isn't meant to be used to access the object; it is a signal that the object has already been finalized and the garbage collector is ready to reclaim its memory.

As per API doc,

Phantom reference objects, which are enqueued after the collector determines that their referents may otherwise be reclaimed. Phantom references are most often used for scheduling pre-mortem cleanup actions in a more flexible way than is possible with the Java finalization mechanism.


People usually attempt to use the finalize() method to perform postmortem cleanup on objects, which is usually not advisable. As mentioned earlier, finalizers have an impact on the performance of the garbage collector, since objects with finalizers are slow to garbage collect.

Phantom references are a safe way to know that an object has been removed from memory. For instance, consider an application that deals with large images. Suppose we want to load a new big image while an old one is still in memory but ready to be garbage collected. In that case we would like to wait until the old image is collected before loading the new one, and a phantom reference is a flexible and safe option: the reference to the old image is enqueued in the ReferenceQueue once the old image object is finalized, and after receiving that reference we can load the new image into memory.

Similarly, we can use phantom references to implement a connection pool: we can gain control over the number of open connections and block until one becomes available.

Reference Objects and Garbage Collection

A SoftReference's referent can be garbage collected once there are no strong references to it; however, it is typically retained until memory is low. All softly reachable objects will be reclaimed before an OutOfMemoryError is thrown, so soft references can be used to implement caches of objects that can be recreated if needed.

A WeakReference's referent can be garbage collected when there are no strong or soft references to it. Unlike soft references, weakly referenced objects are collected on a GC even when memory is abundant. Weak references are often used for canonical mappings, where each object has a unique identifier, and in collections of listeners.

A PhantomReference's referent, on the other hand, can be garbage collected once there are no strong, soft or weak references to it. When an object is phantom reachable, it has already been finalized but not yet reclaimed, so the GC enqueues the reference in a ReferenceQueue for post-finalization processing.

As per Java Doc,

Unlike soft and weak references, phantom references are not automatically cleared by the garbage collector as they are enqueued. An object that is reachable via phantom references will remain so until all such references are cleared or themselves become unreachable.


A PhantomReference is not automatically cleared when it is enqueued, so when we remove a PhantomReference from a ReferenceQueue, we must call its clear() method or allow the PhantomReference object itself to be garbage-collected.

Summary

In short, we should avoid finalize() as much as possible. There is no guarantee that the finalize() method will be called promptly after garbage collection, or that it will be called at all. If a finalize method runs for a long time, it can delay execution of the finalize methods of other objects. Instead of relying on finalize(), we can use the reference types defined in the java.lang.ref package.

Besides the java.lang.ref package, the Google Collections library also provides some alternatives. For example, FinalizablePhantomReference extends java.lang.ref.PhantomReference and takes care of processing the ReferenceQueue, calling back a convenient method, finalizeReferent(). So if we want to perform some cleanup operation when an object is reclaimed by the garbage collector, we just need to override the finalizeReferent() method.

Resources

Garbage Collection
PhantomReference
The java.lang.ref Package (Java in a Nutshell)
Understanding Weak References

If the iterations of a loop are independent and we don’t need to wait for all tasks to complete before proceeding, we can use an Executor to transform a sequential loop into parallel task execution.

Transform a sequential loop into parallel execution using ExecutorService and CompletionService

First, consider an example which processes all tasks using a sequential loop.

public void sequentialLoop(List<Element> elements) {
    for (Element e : elements)
        process(e);
}

Let's modify our code and use an Executor to execute the tasks in parallel.

public void processLoopInParallel(Executor exec, List<Element> elements) {
    for (final Element e : elements)
        exec.execute(new Runnable() {
            public void run() { process(e); }
        });
}

The second version returns more quickly, since the tasks are merely queued to the Executor rather than each being awaited in turn.

If you want to submit a set of tasks and retrieve the results as they become available, you can use a CompletionService:

void solve(Executor e, Collection<Callable<Result>> solvers)
        throws InterruptedException, ExecutionException {
    CompletionService<Result> ecs =
            new ExecutorCompletionService<Result>(e);
    for (Callable<Result> s : solvers)
        ecs.submit(s);
    // retrieve completed task results and use them
    int n = solvers.size();
    for (int i = 0; i < n; ++i) {
        Result r = ecs.take().get();
        if (r != null)
            use(r);
    }
}

The advantage of using a CompletionService is that take() always returns the result of the first task to complete, so we never block waiting on one particular task while others have already finished in the background.

If you want to submit a set of tasks and wait for them all to complete, you can use ExecutorService.invokeAll
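A sketch of invokeAll with two trivial tasks (the task bodies are arbitrary, chosen only so the results can be checked):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

ExecutorService pool = Executors.newFixedThreadPool(2);
List<Callable<Integer>> tasks = new ArrayList<Callable<Integer>>();
tasks.add(new Callable<Integer>() { public Integer call() { return 1 + 1; } });
tasks.add(new Callable<Integer>() { public Integer call() { return 2 * 3; } });

// invokeAll blocks until every task has completed,
// returning Futures in the same order as the task list.
List<Future<Integer>> results = pool.invokeAll(tasks);
int sum = 0;
for (Future<Integer> f : results) {
    sum += f.get();   // get() cannot block here: all tasks are done
}
pool.shutdown();
// sum is 8
```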

This short post describes the process of upgrading Xalan-Java to version 2.7.1 to work with Java 1.6.

Xalan-Java is an XSLT processor for transforming XML documents into HTML, text, or other XML document types.

The issues that need to be fixed so that our XSLs work with Java JDK 1.6 and Xalan 2.7.1 are discussed here, along with sample examples that help in creating XSLs compatible with these new versions.

Please note that the stylesheets exhibiting the problems identified here parse fine with Java JDK 1.5; these changes are required only if we want our XSLs to parse successfully with Java JDK 1.6.

xsl:import

This element must appear as the first child node of xsl:stylesheet or xsl:transform. It must not appear in the middle or at the end of the stylesheet; otherwise the following exception is thrown:

“Error! xsl:import is not allowed in this position in the stylesheet!”

xsl:template

xsl:template must have either a name or a match attribute. If the name attribute is omitted, there must be a match attribute. For example, the code

<xsl:template >
    <xsl:text>
    -- </xsl:text>
</xsl:template>
throws the exception

“Fatal Error! java.lang.RuntimeException: ElemTemplateElement error: xsl:template requires either a name or a match attribute.”



The code below executes successfully:
<xsl:template name="main">
    <xsl:text>
    -- </xsl:text>
</xsl:template>

xsl:value-of

xsl:value-of cannot be enclosed within xsl:text.

For example, the code
<xsl:text>
    <xsl:value-of select="@name"/>
</xsl:text>

will throw exception

“Error! xsl:value-of is not allowed in this position in the stylesheet!”

The xsl:value-of element can be used directly, outside xsl:text, to write the value to the output stream.

Use StringBuffer/StringBuilder in XSL

XSL Code such as
<xsl:variable name="allTableNames" select="java:java.lang.StringBuffer.new()" />
<xsl:variable name="void0" select="java:append($allTableNames, concat('', $toAppend))" />

throws the exceptions

java.lang.IllegalArgumentException: argument type mismatch

and java.lang.NullPointerException

The reason is that org.apache.xalan.extensions.MethodResolver picks a method whose argument types are not converted properly. This shows up in newer JVM versions because of the order in which they return the methods available for a given class, and the issue is still unresolved. Further information can be found at

https://issues.apache.org/jira/browse/XALANJ-2374
https://issues.apache.org/jira/browse/XALANJ-2315

I replaced the StringBuffer with a Map to correctly transform the XSL.

XSL DTMNodeIterator

Consider the following XSL.
<xsl:variable name="temp_core_alias_value">
    <xsl:choose>
       <xsl:when test="'true'">
          <xsl:text>, CASE WHEN </xsl:text><xsl:text>.
          </xsl:text>
          <xsl:text> IS NULL THEN </xsl:text>
        </xsl:when>
        <xsl:otherwise>
            <xsl:text>,</xsl:text>
            <xsl:text>.</xsl:text>
        </xsl:otherwise>
    </xsl:choose>
</xsl:variable>

<xsl:variable name="temp_core_alias" select="java:put($processedMap,
    $temp_core_alias_value, 'Test Value')"/>
<xsl:variable name="hasValue"
    select="java:containsKey($processedMap, $temp_core_alias_value)"/>
<xsl:if test="$hasValue = 'true'">
    <xsl:text>Value exist in Map = </xsl:text>
    <xsl:value-of select="java:get($processedMap, $temp_core_alias_value)"/>
</xsl:if>
At this point, we might expect java:containsKey($processedMap, $temp_core_alias_value) to return true and the output to be “Value exist in Map = Test Value”. But when transformed using Xalan-Java 2.7.1, the Map returns false and no output appears.

The reason is that evaluating the expression for the temp_core_alias_value variable yields an org.apache.xml.dtm.ref.DTMNodeIterator instance (e.g. DTMNodeIterator@e020c9), so the containsKey method of the Map always returns false regardless of the key being present.

We can convert the DTMNodeIterator to a normal String by concatenating an empty string with the temp_core_alias_value variable, like this:
<xsl:variable name="temp_core_alias_value" select="concat('', $temp_core_alias_value)" />

Adding this line right before the put into the Map produces the expected output.

Remote debugging a Java application server using Eclipse

Start the Java application and tell the JVM that it will be debugged remotely by adding the following options to the JVM arguments:

java -Xdebug -Xrunjdwp:transport=dt_socket,address=8998,server=y,suspend=n


Then start the IDE's remote debugger listening on port 8998.

transport=dt_socket specifies that debugger connections will be made through a socket, while address=8998 sets the port number to 8998. With suspend=y, the JVM starts in suspended mode and stays suspended until a debugger attaches; with suspend=n, as above, it starts running immediately.
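On Java 5 and later JVMs, the same settings can also be expressed with the single -agentlib:jdwp option, which has superseded the older -Xdebug/-Xrunjdwp pair. A sketch, with the application jar name (app.jar) used only as a placeholder:

```shell
# Modern equivalent of the flags above; app.jar is a hypothetical application
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8998 -jar app.jar
```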

Configuring Eclipse to Debug a Remotely Running Application

  • Start Eclipse
  • Navigate to Run -> Debug Configurations
  • Create a new Remote Java Application configuration
  • Configure the remote application's details
  • Click Apply
See Also
JPDA
Debugging J2EE Applications
Debugging with the Eclipse Platform

Some key points include

  • Ant 1.8 has improved directory scanning performance and better symbolic-link cycle handling.
  • It brings enhancements and bug fixes to many tasks and types (a strong point for Ant), as well as some core changes.
  • With more than 275 fixed Bugzilla issues, Ant 1.8 flaunts some new performance improvements. A large directory scan that would have taken 14 minutes in Ant 1.7.1 now takes as little as 22 seconds with Ant 1.8.
  • Ant 1.8 includes a handful of new elements, including <extension-point>, a new top-level element that assists in writing reusable build files that are meant to be imported. Its name and dependency-list are similar to <target>, and it can be used like a <target> from the command line or in a dependency-list; in addition, the importing build file can add targets to the <extension-point>'s depends list.

Other additions include:
  • New lexically scoped local properties.
  • An enhanced <import> task that can import from any file or URL resource.
  • An easier mechanism for extending Ant's property expansion.
  • A new task called <include> that provides a preferred alternative to <import> when you don't want to override any targets.
  • Rewritten if and unless attributes that do what is expected when applied to a property expansion (i.e. if="${foo}" means "yes, do it" if ${foo} expands to true; in Ant 1.7.1 it would mean "no" unless a property named "true" existed). This adds "testing conditions" to property expansion as a new use-case.

Ant 1.8 requires Java 1.4 or later.

Other References

https://ant.apache.org/bindownload.cgi

Release notes

https://dzone.com/articles/ant-18-scanning-leaves-171

Why Override hashCode() and equals()?

Every Java object has two very important methods: hashCode() and equals(). These methods are designed to be overridden according to their specific general contracts. This article describes why and how to override hashCode() so that its contract is preserved.

Contract For HashCode Method

The contract for hashCode says
“If two objects are equal, then calling hashCode() on both objects must return the same value”.
Now the question that comes to mind is: must the above statement always hold?

Assume that we have provided a correct implementation of the equals() method for our class. What would happen if we did not obey the above contract?
To answer this question, let us consider two situations:
  1. Objects that are equal but return different hashCodes
  2. Objects that are not equal but return the same hashCode
Objects that are equal but return different hashCodes
What would happen if two objects are equal but return different hashCodes? Your code would run perfectly fine until you store the objects in a collection like HashSet or HashMap. When you do that, you might get strange problems at runtime.
To understand this better, you first have to understand how collection classes such as HashMap and HashSet work. These classes depend on the fact that the objects you put in them as keys obey the above contract. If you do not obey the contract and store such objects in a collection anyway, you will get strange and unpredictable results at runtime.

Consider a HashMap. When you store values in a HashMap, they are actually stored in a set of buckets, each of which is assigned a number that identifies it. When you put a value in the HashMap, it is stored in one of those buckets; which bucket is used depends on the hashCode returned by your key object. If hashCode() returns 49 for an object, that entry is stored in bucket 49 of the HashMap.
Later, when you check whether the collection contains an element by invoking contains(element), the HashMap first gets the hashCode of that element and then looks into the bucket that corresponds to it. If the bucket is empty, we are done: it returns false, meaning the HashMap does not contain the element.
If there are one or more objects in the bucket, it compares the element with every element in that bucket using your equals() method.
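To make this concrete, here is a small sketch (the Point class is hypothetical) in which equals() is overridden but hashCode() is not, so an equal lookup key almost always carries a different hash and the wrong bucket is searched:

```java
import java.util.HashMap;
import java.util.Map;

public class BrokenHashDemo {
    // Hypothetical key: equals() is overridden, hashCode() is not,
    // which breaks the contract.
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof Point)) return false;
            Point p = (Point) o;
            return p.x == x && p.y == y;
        }
        // hashCode() deliberately not overridden: equal Points return
        // different (identity-based) hash codes.
    }

    public static void main(String[] args) {
        Map<Point, String> map = new HashMap<>();
        map.put(new Point(1, 2), "stored");

        // The lookup key is equal to the stored key, but its default
        // hashCode differs, so HashMap searches the wrong bucket.
        System.out.println(map.get(new Point(1, 2))); // prints null
    }
}
```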

Objects that are not equal but return the same hashCode
The hashCode contract says nothing about this second statement: different objects may legally return the same hashCode value. However, collections like HashMap work less efficiently when many different objects return the same hashCode.
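By contrast, a legal but heavily colliding hashCode() only hurts performance, not correctness, because equals() still distinguishes the keys within a bucket. A small sketch (the Word class is hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    // Hypothetical key whose hashCode is legal but collides for every instance
    static final class Word {
        final String s;
        Word(String s) { this.s = s; }

        @Override
        public boolean equals(Object o) {
            return o instanceof Word && ((Word) o).s.equals(s);
        }
        @Override
        public int hashCode() { return 42; } // every Word lands in one bucket
    }

    public static void main(String[] args) {
        Map<Word, Integer> m = new HashMap<>();
        m.put(new Word("a"), 1);
        m.put(new Word("b"), 2);

        // Both keys share a bucket, but equals() still tells them apart,
        // so lookups stay correct (just slower for large maps).
        System.out.println(m.get(new Word("a"))); // prints 1
        System.out.println(m.get(new Word("b"))); // prints 2
    }
}
```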

Why Buckets

The reason the bucket mechanism is used is efficiency. Imagine that all the objects you put in the HashMap were stored in one big list; you would then have to compare your input with every object in that list to check whether a particular element is in the Map. With buckets, you compare only the elements of one specific bucket, and a bucket usually holds only a small portion of all the elements in the HashMap.

Overriding hashCode Method

Writing a good hashCode() method is always a tricky task for a new class.

Return Fixed Value
You can implement a hashCode() method that always returns a fixed value, for example:

//bad performance
@Override
public int hashCode() {
    return 1;
}
The above method satisfies all the requirements and is legal according to the hash code contract, but it is not very efficient. With this method, all objects are stored in the same bucket (bucket 1), and any check for whether a specific object is present in the collection has to scan the entire content of that bucket.

On the other hand, if you override hashCode() in a way that breaks the contract, then contains() may return false for an element that is present in the Collection but sitting in a different bucket.

Method From Effective Java
Joshua Bloch in Effective Java provides good guidelines for generating a hashCode() value
  1. Store some constant nonzero value, say 17, in an int variable called result.
  2. For each significant field f in your object (each field taken into account by equals()), do the following:
     a. Compute an int hash code c for the field:
        i. If the field is a boolean, compute c = (f ? 1 : 0).
        ii. If the field is a byte, char, short, or int, compute c = (int) f.
        iii. If the field is a long, compute c = (int) (f ^ (f >>> 32)).
        iv. If the field is a float, compute c = Float.floatToIntBits(f).
        v. If the field is a double, compute long l = Double.doubleToLongBits(f), then c = (int) (l ^ (l >>> 32)).
        vi. If the field is an object reference that this class's equals() compares by recursively invoking equals(), recursively invoke hashCode() on the field: c = f.hashCode().
        vii. If the field is an array, treat it as if each element were a separate field; that is, compute a hash code for each significant element by applying the rules above.
     b. Combine the hash code c computed in step 2.a into result as follows: result = 37 * result + c;
  3. Return result.
  4. When you are finished, check that equal instances have equal hash codes.
Here is an example of a class that follows the above guidelines

public class HashTest {
    private String field1;
    private short  field2;
    // ...

    @Override
    public int hashCode() {
        int result = 17;
        result = 37*result + field1.hashCode();
        result = 37*result + (int)field2;
        return result;
    }
}
You can see that the constant 37 is chosen because it is an odd prime; any other prime number could be used instead. Using a prime helps distribute the objects better over the buckets. I encourage the reader to explore the topic further in other resources.

Using java.util.Objects.hash
The java.util.Objects class contains a utility method, hash(Object... values), that can be used to calculate a hash for a sequence of objects. With this method, we can implement hashCode for our example HashTest class as follows:
public class HashTest {
    private String field1;
    private short  field2;
    // ...

    @Override
    public int hashCode() {
        return java.util.Objects.hash(field1, field2);
    }
}
Apache HashCodeBuilder
Writing a good hashCode() method is not always easy. Since it can be difficult to implement hashCode() correctly, it is helpful to have a reusable implementation.

The Jakarta Commons org.apache.commons.lang.builder package provides a class named HashCodeBuilder that is designed to help implement a hashCode() method. Developers often struggle with implementing hashCode(), and this class aims to simplify the process.
Here is how you would implement the hashCode() method for the class above:

public class HashTest {
    private String field1;
    private short  field2;
    // ...

    @Override
    public int hashCode() {
        return new HashCodeBuilder(83, 7)
            .append(field1)
            .append(field2)
            .toHashCode();
    }
}
Note that the two numbers for the constructor are simply two different, non-zero, odd numbers - these numbers help to avoid collisions in the hashCode value across objects.

If required, the superclass hashCode() can be folded in using appendSuper(int).
You can see how easy it is to implement hashCode() using Apache HashCodeBuilder.

Mutable Object As Collection Key

It is general advice that you should use immutable objects as keys in a Collection. hashCode() works best when calculated from immutable data. If you use a mutable object as a key and change its state so that its hashCode changes, the stored object will end up in the wrong bucket of the Collection.

The most important consideration when implementing hashCode() is that, regardless of when the method is called, it must produce the same value for a particular object every time it is called. If an object produces one hashCode() value when it is put() into a HashMap and another value during a get(), you will not be able to retrieve that object. Therefore, if your hashCode() depends on mutable data in the object, changing that data will produce a different hash code and effectively a different key.
Look at the example below
public class Employee {

    private String name;
    private int age;

    public Employee() {
    }

    public Employee(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public String getName() {
       return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }

    @Override
    public boolean equals(Object obj) {
        //Remember: Some Java gurus recommend you avoid using instanceof
        if (obj instanceof Employee) {
            Employee emp = (Employee)obj;
            return (emp.name.equals(name) && emp.age == age);
        }
        return false;
    }

    @Override
    public int hashCode() {
        return name.length() + age;
    }

    public static void main(String[] args) {
        Employee e = new Employee("muhammad", 24);
        Map<Object,Object> m = new HashMap<Object,Object>(); 
        m.put(e, "Muhammad Ali Khojaye");  
    
        // getting output
        System.out.println(m.get(e));

        e.name = "abid";

        // it fails to get
        System.out.println(m.get(e));

        e.name = "amirrana";

        // it fails again
        System.out.println(m.get(new Employee("muhammad", 24)));
    }
}
So we can see from the example above how we get unpredictable results after modifying the object's state.

Another Example of Mutable Field as Key

Let us consider another example:
public class HashTest {
    private int mutableField;
    private final int immutableField;

    public HashTest(int mutableField, int immutableField) {
        this.mutableField = mutableField;
        this.immutableField = immutableField;
    }

    public void setMutableField(int mutableField) {
        this.mutableField = mutableField;
    }

    @Override
    public boolean equals(Object o) {
        if(o instanceof HashTest) {
            return (mutableField == ((HashTest)o).mutableField)
               && (immutableField ==  ((HashTest)o).immutableField);
        }else {
            return false;
        }              
    }

    @Override
    public int hashCode() {
        int result = 17;
        result = 37 * result + this.mutableField;
        result = 37 * result + this.immutableField;
        return result;
    }

    public static void main(String[] args) {
        Set<HashTest> set = new HashSet<HashTest>();
        HashTest obj = new HashTest(6622458, 626304);
        set.add(obj);                 
        System.out.println(set.contains(obj));     
        obj.setMutableField(3867602);        
        System.out.println(set.contains(obj));
    }
}
After changing mutableField, the computed hashCode no longer points to the old bucket, and contains() returns false.
We can tackle such situations in either of these ways:
  • hashCode() works best when calculated from immutable data; therefore, ensure that only immutable objects are used as keys in Collections.
  • If you must include mutable fields in hashCode(), ensure that the object's state does not change while it is used as a key in a hash-based collection. If it has to change, first remove the object from the collection (set/map), update the mutable field, and then add it back.
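The remove/update/re-add pattern from the second bullet can be sketched as follows; the Key class below is a minimal, hypothetical stand-in for a mutable key like HashTest above:

```java
import java.util.HashSet;
import java.util.Set;

public class SafeMutationDemo {
    // Minimal mutable key whose hashCode depends on a mutable field
    static final class Key {
        int field;
        Key(int f) { field = f; }

        @Override
        public boolean equals(Object o) {
            return o instanceof Key && ((Key) o).field == field;
        }
        @Override
        public int hashCode() { return 37 * 17 + field; }
    }

    public static void main(String[] args) {
        Set<Key> set = new HashSet<>();
        Key k = new Key(1);
        set.add(k);

        // Remove first, while hashCode still matches the stored bucket,
        // then mutate, then re-add so the key lands in its new bucket.
        set.remove(k);
        k.field = 2;
        set.add(k);

        System.out.println(set.contains(k)); // prints true
    }
}
```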

Memory leaks with HashCode and Equal

A memory leak can occur in a Java application if equals() and hashCode() are not implemented for a key class. Consider the small example below, in which the HashMap keeps references alive because equals() and hashCode() are not implemented: every put() of the same logical key adds a new entry, so the HashMap grows continuously and eventually an OutOfMemoryError is thrown.
/**
 * Example demonstrating a Hashcode leak.
 */
public class HashcodeLeakExample {
    private String id;

    public HashcodeLeakExample(String id) {
        this.id = id;
    }

    public static void main(String args[]) {
        try {
            Map<HashcodeLeakExample, String> map = 
                  new HashMap<HashcodeLeakExample, String>();
            while (true) {
                map.put(new HashcodeLeakExample("id"), "any value");
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}


In earlier versions of Java, marker interfaces were the only way to declare metadata about a class. With the advent of annotations in Java 5, it is often said that marker interfaces no longer have a place: they can be completely replaced by annotations, which allow a far more flexible metadata capability. Everything that can be done with marker interfaces can be done with annotations instead, and it is now commonly recommended that the marker interface pattern not be used anymore. Annotations can have parameters of various kinds, and they are much more flexible. We can also see that the examples in the Sun APIs are rather old and that no new ones have been added since annotations were introduced. In this post, we will see whether marker interfaces can still be useful for any reason.

Purpose of Marker Interfaces in Java

A marker interface is an interface with no method declarations. It simply marks the classes that implement it, so that code which processes objects of those classes knows to treat them differently. Marker interfaces are used by other code to test for permission to do something.

Marker Interfaces in Java

Some well-known examples are

  • java.io.Serializable - objects implementing it can be serialized using ObjectOutputStream.
  • java.lang.Cloneable - the object's clone() method may be called.
  • java.util.RandomAccess - the list supports fast (generally constant-time) random access.

They are also known as tag interfaces, since they tag all the derived classes into a category based on their purpose.

Difference between Interface and Marker Interface

An interface in general defines a contract. Interfaces represent a public commitment and, when implemented, form a contract between the class and the outside world. An empty interface, on the other hand, defines no members, and as such does not define a contract that can be implemented.

Normally, when a class implements an interface, it tells us something about instances of the class. It represents the "is a" relationship that exists in inheritance: for example, when a class implements List, its objects are Lists.

With marker interfaces, this inheritance reading usually does not hold. For example, if a class implements the marker interface Serializable, then rather than saying that the object is a Serializable, we say that the object has a property: it is serializable.

Should We Avoid Marker Interfaces?

One common problem with marker interfaces is that when a class implements one, every subclass inherits it as well. Since you cannot un-implement an interface in Java, a subclass that should not be treated differently is still marked. For example, if Foo implements Serializable, any subclass Bar does too.

Moreover, there are places in the Sun APIs where such interfaces have been used for messy and inconsistent purposes. Consider the Cloneable marker interface. If an operation is not supported by an object, it can throw an exception when the operation is attempted, as Collection.remove does when the collection does not support removal (e.g. an unmodifiable collection); but a class claiming to implement Cloneable while throwing CloneNotSupportedException from its clone() method would not be a very friendly thing to do.

Many developers consider it a broken interface. Ken Arnold and Bill Venners also discussed it in Java Design Issues. As Ken Arnold said,

If I were to be God at this point, and many people are probably glad I am not, I would say deprecate Cloneable and have a Copyable, because Cloneable has problems. Besides the fact that it's misspelled, Cloneable doesn't contain the clone method. That means you can't test if something is an instance of Cloneable, cast it to Cloneable, and invoke clone. You have to use reflection again, which is awful. That is only one problem, but one I'd certainly solve.

It has also been recorded as a bug, which can be viewed at https://bugs.openjdk.java.net/browse/JDK-4098033

Are Marker Interfaces Dead?

Appropriate Use of Marker Interfaces vs Annotation

We often hear that the marker interface is obsolete and has no place. But there are situations where a marker interface can be handier than an annotation.

Annotations have to be checked at runtime using reflection, whereas empty interfaces can be checked at compile time using the compiler's type system. Compile-time checking is one of the most convincing reasons to use such interfaces.

Consider the example below:

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// RUNTIME retention is required so isAnnotationPresent can see the annotation
@Retention(RetentionPolicy.RUNTIME)
@interface HasTag {
}

@HasTag
public class ClasswithTag {
}

public void performAction(Object obj){
    if (!obj.getClass().isAnnotationPresent(HasTag.class)) {
        throw new IllegalArgumentException("cannot perform action...");
    } else {
        //do stuff as require
    }
}

One problem with this approach is that the check for the custom annotation can occur only at runtime. Sometimes it is important that the check for the marker be done at compile time. Let me refine the above example to use a marker interface.

interface HasTag {
}

public class ClassWithTag implements HasTag {
}

public void performAction(HasTag ct){
    //do stuff as require
}

Similarly, in the case of the Serializable marker interface, the ObjectOutputStream.writeObject(Object) method fails at runtime if its argument is not serializable; this would not be the case if ObjectOutputStream had a writeObject(Serializable) method instead.
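A small sketch of that runtime failure (the class names below are hypothetical):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;

public class SerializableCheckDemo {
    // Deliberately does NOT implement the Serializable marker interface
    static class NotMarked { }

    public static void main(String[] args) throws IOException {
        ObjectOutputStream out =
            new ObjectOutputStream(new ByteArrayOutputStream());
        try {
            out.writeObject(new NotMarked());
        } catch (NotSerializableException e) {
            // The failure surfaces only at runtime, because writeObject
            // accepts Object rather than Serializable.
            System.out.println("caught: " + e);
        }
    }
}
```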

Marker interfaces are also well integrated into Javadoc, where one can promptly see which classes implement a marker interface simply by looking it up and viewing the list of implementing classes.

Moreover, Joshua Bloch in Effective Java recommends using marker interfaces to define a type, which yields the very real benefit of compile-time type checking.