You review the 'as_trace.log' file and see an error like this:
2018-09-15 13:41:46,676 | 10.85.128.45:9080 | ibm | admin | | 165de0893a5-eb27836496a4554f | ERROR | spark.SparkJobRunner | OO-E(10)-P-4-T-6 => Fail to run spark job org.apache.spark.SparkException: Job aborted due to stage failure: Task 6 in stage 1000.0 failed 4 times, most recent failure: Lost task 6.3 in stage 1000.0 (TID 102082, spssas123.mydomain): ExecutorLostFailure (executor 491 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 27.5 GB of 27.5 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
The container's physical memory limit (27.5 GB in this log) is the sum of spark.executor.memory and spark.yarn.executor.memoryOverhead; YARN kills the container when the executor exceeds that limit. A rule of thumb to use here is:

spark.executor.memory + spark.yarn.executor.memoryOverhead < yarn.nodemanager.resource.memory-mb

You can decrease spark.yarn.executor.memoryOverhead and spark.executor.memory, and/or increase yarn.nodemanager.resource.memory-mb to more than 27 GB. How far you can raise yarn.nodemanager.resource.memory-mb depends on the memory available on each node.
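As a minimal sketch of a budget that satisfies the rule, assuming a node with 32 GB reserved for YARN containers (the sizes below are illustrative, not recommendations):

# Per-node memory that YARN can allocate to containers, in MB (32 GB here)
yarn.nodemanager.resource.memory-mb=32768

# Executor heap (24 GB) plus off-heap overhead (4096 MB = 4 GB) totals 28 GB,
# which stays under the 32 GB node budget and so satisfies the rule of thumb.
spark.executor.memory=24g
spark.yarn.executor.memoryOverhead=4096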
For Ambari:
Adjust spark.executor.memory and spark.yarn.executor.memoryOverhead in the SPSS Analytic Server service configuration under "Configs -> Custom analytics.cfg" (see the sketch after these steps).
Adjust yarn.nodemanager.resource.memory-mb in the YARN service configuration under the "Settings" tab and "Memory Node" slider.
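A sketch of the two lines as they might appear in Custom analytics.cfg, reusing the illustrative sizes from above:

# Illustrative values; size these to your own nodes
spark.executor.memory=24g
spark.yarn.executor.memoryOverhead=4096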
For Cloudera Manager:
Adjust spark.executor.memory and spark.yarn.executor.memoryOverhead in the SPSS Analytic Server service configuration under "Analytic Server Advanced Configuration Snippet (Safety Valve) for analyticserver-conf/config.properties" (see the sketch after these steps).
Adjust yarn.nodemanager.resource.memory-mb in the YARN service configuration under "Container Memory yarn.nodemanager.resource.memory-mb".
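The safety valve takes the same key=value lines that end up in analyticserver-conf/config.properties; a sketch, again with illustrative sizes:

# Illustrative values; size these to your own nodes
spark.executor.memory=24g
spark.yarn.executor.memoryOverhead=4096

Then set Container Memory itself in the YARN field named above, for example 32 GB, so the per-node budget stays above the executor total.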