<!--HTML CODE-->
<html>
<head>
<!-- CSS CODE-->
<style>
/* Background Image */
body
{
background-image: url(Background.png); /* Used to add the Background Picture */
background-position: center; /* Used to center the Picture into the screen */
background-size: cover; /* Scales the picture so it covers the entire screen */
background-repeat:repeat; /* Used to repeat the Background Picture */
background-attachment: fixed; /* Used to keep the background fixed when scrolling */
}
#p1
{
color:aqua;
}
</style>
</head>
<body>
<header>
</header>
<main>
<p id="p1"> GeorgeK@portfolio:~@</p>
</main>
<footer>
</footer>
</body>
</html>
THIS IS MY CODE
Why is this code different?
The difference is when and where you create the task. There is a big difference between
... Task.Run(async () => await CreateTheTaskInsideTheLambda().ConfigureAwait(false))...
and
var theTask = CreateTheTaskOutsideOfLambda();
... Task.Run(async () => await theTask.ConfigureAwait(false))...
The lambda is executed on the thread pool, so CreateTheTaskInsideTheLambda() cannot capture any ambient SynchronizationContext. .ConfigureAwait(false) here changes nothing; it's not necessary.
CreateTheTaskOutsideOfLambda(), on the other hand, may capture the SynchronizationContext on the first await inside it, if that await doesn't have .ConfigureAwait(false). This may cause a deadlock.
Again, .ConfigureAwait(false) in Task.Run(async () => await theTask.ConfigureAwait(false)) changes nothing.
I built a persistence plugin that supports asynchronous storage in IndexedDB: https://github.com/erlihs/pinia-plugin-storage
This turned out to be a version difference in SurveyJS.
In our UAT environment (older SurveyJS version), using "maxValueExpression": "today()" does not work on a question with "inputType": "month". Any month value triggers the built-in validation error, making the question unanswerable.
In our dev environment (newer surveyJS), the same configuration works as expected. Users can select the current or past month, and selecting future months is blocked.
Resolution
Upgrade to the latest SurveyJS. After upgrading, the configuration below behaves correctly:
{
  "pages": [
    {
      "elements": [
        {
          "type": "text",
          "name": "m",
          "title": "Month",
          "inputType": "month",
          "isRequired": true,
          "maxValueExpression": "today()"
        }
      ]
    }
  ]
}
Why others couldn't reproduce
They were testing on newer SurveyJS versions where this issue has already been fixed.
so for everyone who googles this in the future, on mac it's a little icon like a rectangle between square brackets. looks like this: [â]
That's what you should do with its suggestion: just add this annotation above the code that has the issue:
@kotlin.ExperimentalStdlibApi
Or you can upgrade your IntelliJ to the latest version, which also solves this issue. I got the same error and both ways worked for me.
First, WildFly 14 is quite old, so I'm not sure what is supported, and you should definitely upgrade (37.0.1 is out now). https://docs.wildfly.org/wildfly-proposals/microprofile/WFLY-11529_subsystem_metrics.html shows that metrics are exposed as Prometheus data to be consumed. I'm pretty sure you can find documentation on how to do that on the web, like https://www.mastertheboss.com/jbossas/monitoring/using-elk-stack-to-collect-wildfly-jboss-eap-metrics/
This is still one of the first results in Google so thought I'd answer even though it is an old post.
I did a spawned filter for GeoIP of email servers a bit ago. Code is on github if anyone wants it.
Size limit, probably. I built a persistence plugin that supports asynchronous storage in IndexedDB: https://github.com/erlihs/pinia-plugin-storage
I know this is super late, but hopefully this helps anyone else needing a workaround for such a function. I had a similar requirement. Permit me to describe my requirement for better context, for those who have a similar problem, except that in my case it wasn't a 5th or 30th record; it's expected to be dynamic.
On a stock market analysis project, each day has a market record, but days aren't sequential; there are gaps, e.g. weekends, public holidays, etc. Depending on the user's input, the program can compute or compare across a dynamic timeline, e.g. 5 market days = 1 week, 2W, 3W, 52W comparison, etc. The calendar isn't reliable here, since the data is tied to trading days, not calendar days. In my case it became expedient to leverage the row number.
E.g. if the date is 2024-08-05 and the row_number is 53,505, I can look up 25 market days or 300 records away to compute growth, etc.
Back to the Answer.
I used Django's annotate() function with a subquery that leverages PostgreSQL's ROW_NUMBER() window function to filter the queryset. The answer q = list(qs) above would suffice in cases where there isn't much data, but I wanted to avoid materializing a large queryset into a list, which would be inefficient.
The SQL query looked something like this:
SELECT subquery.row_num FROM (SELECT id, ROW_NUMBER() OVER (ORDER BY id ASC) as row_num FROM {table_name}) subquery WHERE subquery.id = {table_name}.id
Here's how I implemented it in my Django workflow:
from django.db import models
from django.db.models.expressions import RawSQL

class YourModel(models.Model):
    ...

    @classmethod
    def get_offset_record(cls, record_id, offset):
        """
        Returns the market record (day) `offset` rows before the given record.
        """
        table_name = cls._meta.db_table
        qs = (
            cls.objects.all()
            .annotate(
                row_number=RawSQL(
                    f"(SELECT subquery.row_num FROM (SELECT id, ROW_NUMBER() OVER (ORDER BY id ASC) as row_num FROM {table_name}) subquery WHERE subquery.id = {table_name}.id)",
                    [],
                    output_field=models.IntegerField(),
                )
            )
            .order_by("id")
        )
        try:
            # current_row may be None if record_id does not exist
            current_row = qs.filter(pk=record_id).first()
            target_row_number = current_row.row_number - offset
            return qs.get(row_number=target_row_number)
        except (cls.DoesNotExist, AttributeError):
            return None
I'm aware there's from django.db.models.functions import RowNumber, but I find the raw SQL easier to use.
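For reference, a minimal sketch of the Window/RowNumber-based alternative (my assumption: Django 4.2+, which allows filtering on window-function annotations; this would live inside the same classmethod as above):

from django.db.models import F, Window
from django.db.models.functions import RowNumber

# Annotate every row with its position when the whole table is ordered by id
qs = cls.objects.annotate(
    row_number=Window(expression=RowNumber(), order_by=F("id").asc())
)

# Position of the current record (count of rows with id <= record_id)
current_row_number = cls.objects.filter(id__lte=record_id).count()

# The record `offset` market days earlier
target = qs.filter(row_number=current_row_number - offset).first()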
I hope this helps someone, Cheers!
There is also set key opaque (see also https://superuser.com/questions/1551391 and Set custom background color for key in Gnuplot).
Igbinake, A. O., and C. O. Aliyegbenoma. "Estimation and Optimization of Tensile Strain in Mild Steel." Journal of Applied Sciences and Environmental Management 29.5 (2025): 1554-1559.
In the request headers, we can see:
Provisional headers are shown
It means the request was served from the cache; Chrome didn't need to execute it because the answer was already cached. Cookies aren't cached, so their headers are missing.
A hard reload (Ctrl + Shift + R) bypasses the cache.
I know this thread is really old, but you can create PR templates these days; however, they do not (yet) support the YAML form-field version like issue templates do. https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/creating-a-pull-request-template-for-your-repository
Open the Visual Studio Installer, then choose Modify, check "additional project templates (previous versions)" and ".NET Framework project and item templates", then install them. Finally, open Visual Studio and enjoy!
It has been a while but I can confirm that everything works fine as long as the secret rotation is done one step at a time:
- Rotate the primary secret
- Deploy Pulumi
- Rotate the secondary secret
Select the row containing the column header names.
Copy this row.
In a blank area, right-click and, under Paste Options, select "Transpose".
Whenever we see this error, follow the below steps.
Step 1: Find out whether the username has been added in Git Bash or not.
CMD: git config --list
Step 2: Add the username that matches your GitHub profile. Go to GitHub --> Profile --> you will see your name and next to it something like basha_fxd.
CMD: git config --global user.name "basha_fxd"
Step 3: Run the same command as Step 1. You will see that your username has been added.
Step 4: Run the commit again and it will succeed.
git commit -m 'Updated the code'
I was looking at this issue since I've been using remote compute for development, and with the workspace stored in Azure Blob Storage, keeping node_modules in the workspace is inefficient and slow, and sometimes bricks my remote compute. Instead, I need node_modules stored locally on the Azure remote compute instance; but since it's outside of my workspace, I can't sync the package.json with changes that are live in the repository. I found:
symlink home/azureuser/.../package.json pointing to frontend/package.json, then symlink frontend/node_modules pointing to home/azureuser/.../node_modules.
Basically, symlinking in opposite directions: one (package.json) reading from the workspace, which is live and synced with the repo, and the other (node_modules) reading from the remote compute.
Another solution would be to have the entire workspace on the compute, but it's not company practice
After spending an entire afternoon going down the Google rabbit hole, I found a post here that referenced another post that appeared to be broken but the Wayback Machine came to the rescue here. After finding what I needed, I realized a couple of SO questions came close to providing an answer indirectly (here and here) but the information wasn't presented in a way I would have picked up on at the time.
For DB2 (i Series at least) use an "x" outside the single quotes with the hex string inside them, like so:
SELECT * FROM [schema].[tablename] WHERE fieldname=x'A0B1C2D3'
import seaborn as sns
sns.set_style("darkgrid")
It looks like the plot was previously styled with Seaborn. You might try adding the block above to set it again.
Go to your /Users/your_user/ directory,
type: open .zshrc
and go to the line specified after .zshrc in the error:
/Users/mellon/.zshrc:12(<-- here): parse error near `-y'
Remove/fix the error and you are good to go.
Matplotlib's default style has changed to a more modern look, which is why you are seeing a difference in the plots. You can still use the old style by adding this line to your code:
plt.style.use('classic')
Seems that Vert.x is delegating all JSON parsing to Jackson (com.fasterxml.jackson.databind.ObjectMapper under the hood).
Using the following before creating OpenAPIContract seems to fix the issue:
import com.fasterxml.jackson.databind.DeserializationFeature;
import io.vertx.core.json.jackson.DatabindCodec;

// Disable using Long for integers so small numbers become Integer
DatabindCodec.mapper().configure(DeserializationFeature.USE_LONG_FOR_INTS, false);
You might want to check your dependency tree to identify the conflict, see this Stackoverflow post and this documentation as your reference.
After confirming which dependency is pulling in the older version, exclude the transitive google-api-services-storage dependency from it. Then, explicitly declare the correct version you need as a new, top-level dependency in your pom.xml or build.gradle file.
This will allow Apache Beam 2.67.0 to provide its required, compatible version as a transitive dependency, which will resolve the NoSuchMethodError because the correct version contains the getSoftDeletePolicy() method.
Simply had to update my Fastfile from
xcodes(version: "16.2")
to
xcodes(version: "16.4")
You can try a singleValueLegacyExtendedProperty in Outlook to send a custom key.
Create the key when sending:
const replyMessage = {
  comment: comment,
  message: {
    singleValueExtendedProperties: [
      {
        "id": "String {guid} Name Property-name",
        "value": "String"
      }
    ],
  }
};
example guid: 66f5a359-4659-4830-9070-00040ec6ac6e
And on the event side you can fetch it with expand:
const message = await this.graphClient
.api(`/me/messages/${messageId}`)
.expand("singleValueExtendedProperties($filter=id eq 'String {guid} Name X-CRM-IGNORE')")
.get();
Hey, here are the resources.
NAV 2009 is tricky: its "classic" version has pretty limited integration options, mostly via direct SQL access, flat file export/import, or XML ports. The "RTC"/web services layer is more robust in NAV 2015, which supports OData and SOAP web services for exposing entities like customers and contacts.
For NAV 2009, you'll likely end up using XML ports or automating flat file exports, then building something to sync those with Salesforce (either on a schedule or triggered). Direct SQL access is possible but not recommended unless you're careful about data consistency and NAV's business logic.
Once you upgrade to NAV 2015, things get easier: you can publish pages or codeunits as web services and consume them directly from Salesforce or an integration middleware. You'd expose the relevant entities (contacts, accounts, etc.) and pull data using standard web service calls.
If you need to write back from Salesforce to NAV, you'll need to set up codeunits for that purpose and expose them as web services. Authentication and permissions can be a hassle, so plan some time for that.
In short, integration is much smoother in NAV 2015, but doable in 2009 with more workarounds. If the upgrade is coming soon, it might be worth waiting unless the client needs something ASAP.
You are asking GraphQL to instantiate an abstract class. That is simply impossible.
Change your declaration so that Animal is no longer abstract.
Okay, I see what's going on here. Your column has VLOOKUP formulas dragged down, so even the "empty" ones are returning an empty string (""), and Excel treats those as non-blank cells for counting purposes. That's why SUBTOTAL(3, ...) is counting everything with a formula, including the blanks. And your SUMPRODUCT attempt is skipping them all because every cell has a formula in it. Let's fix this step by step.
I'm assuming you want to count the number of cells in that column (say, J5:J2000 based on your example) that actually have a value from the VLOOKUP (not just ""), and you want to use something like SUBTOTAL to respect any filters or hidden rows you might have.
First, confirm your goal: if you're trying to sum the values instead of count them, let me know, because that changes things (e.g., if it's numbers, SUBTOTAL(9, ...) might already work fine since "" gets treated as 0). But based on what you described, it sounds like a count of non-blank results. If the data from VLOOKUP is always numbers, we can use a simpler trick with SUBTOTAL(2, ...), which counts only numeric cells and ignores text like "". But if it's text or mixed, we'll need a different approach. For now, I'll give you a general solution that works for any data type.
Here's how to set it up without a helper column, using a formula that combines SUMPRODUCT and SUBTOTAL to count only visible cells (ignoring filters) where the value isn't "".
Pick the cell where you want this subtotal to go (probably below your data range or in a summary spot).
Enter this formula, adjusting the range to match yours (I'm using J5:J2000 as an example, but swap in L5:L16282 if that's your actual column):
=SUMPRODUCT(--(J5:J2000<>""), SUBTOTAL(3, OFFSET(J5, ROW(J5:J2000)-ROW(J5), 0)))
Press Enter (or Ctrl+Shift+Enter if you're on an older Excel version that needs array formulas; most modern ones handle it automatically).
What this does in simple terms:
The (J5:J2000<>"") part checks each cell to see if it's not an empty string, turning matches into 1s and non-matches into 0s.
The SUBTOTAL(3, OFFSET(...)) part creates an array of 1s for visible rows and 0s for hidden/filtered rows.
SUMPRODUCT multiplies them together and adds up the results, so you only count the visible cells that aren't "".
Test it out: Apply a filter to your data (like on another column) to hide some rows, and watch the subtotal update automatically; it should only count the visible non-blank ones. If you have no filters, it'll just act like a smart COUNTIF that skips the "" cells.
If this feels a bit heavy for a huge range like 16,000 rows (it might calculate slowly), here's an alternative with a helper column, which is lighter on performance:
Add a new column next to your data, say column K starting at K5.
In K5, put: =IF(J5<>"", 1, "")
Drag that formula down to match your range (all the way to K2000 or whatever).
Now, in your subtotal cell, use: =SUBTOTAL(9, K5:K2000)
This sums the 1s in the helper column, which effectively counts the non-"" cells in J, and SUBTOTAL(9) ignores any filtered rows. You can hide the helper column if it clutters things up.
If your VLOOKUP is always returning numbers (not text), reply and tell me; that lets us simplify to just =SUBTOTAL(2, J5:J2000), since it counts only numeric cells and skips "" (which is text).
In this blog post, you can see how to automate and accelerate chunk downloads using curl
with parallel threads in Python.
https://remotalks.blogspot.com/2025/07/download-large-files-in-chunks_19.html
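The gist of that approach, sketched here with requests instead of curl (my substitution; the URL, chunk count, and filename are placeholders, and the server must support HTTP Range requests and report Content-Length):

from concurrent.futures import ThreadPoolExecutor
import requests

def download_in_chunks(url, dest, n_chunks=4):
    # Total size from a HEAD request, then split into byte ranges
    size = int(requests.head(url, allow_redirects=True).headers["Content-Length"])
    step = size // n_chunks

    def fetch(i):
        start = i * step
        end = size - 1 if i == n_chunks - 1 else start + step - 1
        return start, requests.get(url, headers={"Range": f"bytes={start}-{end}"}).content

    # Fetch all ranges in parallel threads
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:
        parts = list(pool.map(fetch, range(n_chunks)))

    # Reassemble in order of starting offset
    with open(dest, "wb") as f:
        for _, data in sorted(parts):
            f.write(data)

# download_in_chunks("https://example.com/big.iso", "big.iso")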
<preference name="scheme" value="app"/>
<preference name="hostname" value="localhost"/>
Adding this to config.xml solved my problem.
Looking at the answer, it's probably scaling. I would put it down as a feature of the library itself, but I'm not too sure since I only really use ttkbootstrap for my GUIs.
Thank you for your quick reply and your suggestions. We initially implemented logging by creating a custom directory inside wp-content, and this approach worked well on most environments. Here's the code we used:
function site_file() {
$log_dir = WP_CONTENT_DIR . '/site-logs';
// Create directory if it doesn't exist
if (!file_exists($log_dir)) {
mkdir($log_dir, 0755, true);
file_put_contents($log_dir . '/index.php', "<?php // Silence is golden");
file_put_contents($log_dir . '/.htaccess', "Deny from all\n");
}
return $log_dir . '/site.log';
}
However, due to WordPress compliance guidelines and common restrictions on shared hosting, we cannot create writable files outside the uploads folder. So we updated our implementation to fall back to wp_upload_dir() when writing to the custom directory fails. Here's a simplified version of the updated logic:
$root_dir = dirname(ABSPATH);
$log_dir = trailingslashit($root_dir) . 'site-logs';
// Fall back to uploads folder if not writable
if (!wp_mkdir_p($log_dir) || !is_writable($log_dir)) {
$upload_dir = wp_upload_dir();
$log_dir = trailingslashit($upload_dir['basedir']) . 'site-logs';
if (!file_exists($log_dir)) {
wp_mkdir_p($log_dir);
}
// Add basic protections
if (!file_exists($log_dir . '/index.php')) {
@file_put_contents($log_dir . '/index.php', "<?php\n// Silence is golden.\nexit;");
}
if (!file_exists($log_dir . '/.htaccess')) {
@file_put_contents($log_dir . '/.htaccess', "Deny from all\n");
}
// Generate obfuscated log filename
$unique = substr(md5(wp_salt() . get_current_blog_id()), 0, 12);
self::$log_file = trailingslashit($log_dir) . "site-{$unique}.log";
}
This fallback ensures logging works even in restrictive hosting environments, which is important for plugin compatibility. We do not log sensitive data, and we add basic protections like .htaccess and obfuscation.
On Nginx servers, we realize .htaccess is ignored, and the file remains publicly accessible if its path is known, which is the core issue we're trying to mitigate without server-level config access.
Memory Safety in C++ is not really possible.
Here is why: https://sappeur.di-fg.de/WhyCandCppCannotBeMemorySafe.html
The best you can do is to be very disciplined, follow KISS and use modern C++.
I contacted Twilio support and got the feedback that my account is connected to region Ireland (ie1). So the Twilio Client constructor has to look like this:
client = Client(
account_sid=account_sid,
username=api_key_sid,
password=api_key_secret,
edge="dublin",
region="ie1",
)
So be aware of the credentials you use.
I have a working setup, but this error sometimes (very rarely) still happens and then fixes itself, without any changes in the infra.
I'm Gonna Teach You New Skills And Rout In Gorilla Tag So You Can Become A Proand If You Wanna Film A Video Or You Wanna Chill And Have Fun | Got You For $2.5 And The 5 is Important.
Did you succeed? I am trying to do the same thing to migrate my on-premises domain to Entra ID.
The problem occurs in the minification and shrinking process. It is necessary to create an exception in your ProGuard rules file to keep the ExoPlayer classes.
(Scenario) --> In the container instance
Even though I've configured it like below, I'm not able to curl it:
env:
- name: OPTION_LIBS
value: ignite-kubernetes,ignite-rest-http
So I ran the command below:
netstat -tulnp
and I didn't find anything listening on HTTP port 8080. I then configured connectorConfiguration using the below snippet in the config:
<property name="connectorConfiguration">
<bean class="org.apache.ignite.configuration.ConnectorConfiguration">
<property name="host" value="0.0.0.0"/>
<property name="port" value="8080"/>
</bean>
</property>
Then I can confirm that a command server is started, but under the name TCP binary (I was expecting HTTP), as confirmed from the logs:
[11:41:19,261][INFO][main][GridTcpRestProtocol] Command protocol successfully started [name=TCP binary, host=/0.0.0.0, port=8080]
So I tried to fetch it:
wget -qO- http://127.0.0.1:8080
wget: error getting response
and I got the below warning in the logs:
[12:06:56,874][WARNING][grid-nio-worker-tcp-rest-3-#42][GridTcpRestProtocol] Client disconnected abruptly due to network connection loss or because the connection was left open on application shutdown. [cls=class o.a.i.i.util.nio.GridNioException, msg=Failed to parse incoming packet (invalid packet start) [ses=GridSelectorNioSessionImpl [worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=0 lim=90 cap=8192], super=AbstractNioClientWorker [idx=3, bytesRcvd=0, bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker [name=grid-nio-worker-tcp-rest-3, igniteInstanceName=null, finished=false, heartbeatTs=1757506016868, hashCode=1109163085, interrupted=false, runner=grid-nio-worker-tcp-rest-3-#42]]], writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null, closeSocket=true, outboundMessagesQueueSizeMetric=o.a.i.i.processors.metric.impl.LongAdderMetric@27c2862d, super=GridNioSessionImpl [locAddr=/127.0.0.1:8080, rmtAddr=/127.0.0.1:59486, createTime=1757506016868, closeTime=0, bytesSent=0, bytesRcvd=90, bytesSent0=0, bytesRcvd0=90, sndSchedTime=1757506016868, lastSndTime=1757506016868, lastRcvTime=1757506016868, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=GridTcpRestParser [marsh=JdkMarshaller [clsFilter=o.a.i.marshaller.IgniteMarshallerClassFilter@fbbedd80], routerClient=false], directMode=false]], accepted=true, markedForClose=false]], b=47]]
[12:06:56,874][WARNING][grid-nio-worker-tcp-rest-3-#42][GridTcpRestProtocol] Closed client session due to exception [ses=GridSelectorNioSessionImpl [worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=0 lim=90 cap=8192], super=AbstractNioClientWorker [idx=3, bytesRcvd=0, bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker [name=grid-nio-worker-tcp-rest-3, igniteInstanceName=null, finished=false, heartbeatTs=1757506016868, hashCode=1109163085, interrupted=false, runner=grid-nio-worker-tcp-rest-3-#42]]], writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null, closeSocket=true, outboundMessagesQueueSizeMetric=o.a.i.i.processors.metric.impl.LongAdderMetric@27c2862d, super=GridNioSessionImpl [locAddr=/127.0.0.1:8080, rmtAddr=/127.0.0.1:59486, createTime=1757506016868, closeTime=1757506016868, bytesSent=0, bytesRcvd=90, bytesSent0=0, bytesRcvd0=90, sndSchedTime=1757506016868, lastSndTime=1757506016868, lastRcvTime=1757506016868, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=GridTcpRestParser [marsh=JdkMarshaller [clsFilter=o.a.i.marshaller.IgniteMarshallerClassFilter@fbbedd80], routerClient=false], directMode=false]], accepted=true, markedForClose=true]], msg=Failed to parse incoming packet (invalid packet start) [ses=GridSelectorNioSessionImpl [worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=0 lim=90 cap=8192], super=AbstractNioClientWorker [idx=3, bytesRcvd=0, bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker [name=grid-nio-worker-tcp-rest-3, igniteInstanceName=null, finished=false, heartbeatTs=1757506016868, hashCode=1109163085, interrupted=false, runner=grid-nio-worker-tcp-rest-3-#42]]], writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null, closeSocket=true, outboundMessagesQueueSizeMetric=o.a.i.i.processors.metric.impl.LongAdderMetric@27c2862d, super=GridNioSessionImpl [locAddr=/127.0.0.1:8080, rmtAddr=/127.0.0.1:59486, createTime=1757506016868, closeTime=0, bytesSent=0, bytesRcvd=90, bytesSent0=0, bytesRcvd0=90, sndSchedTime=1757506016868, lastSndTime=1757506016868, lastRcvTime=1757506016868, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=GridTcpRestParser [marsh=JdkMarshaller [clsFilter=o.a.i.marshaller.IgniteMarshallerClassFilter@fbbedd80], routerClient=false], directMode=false]], accepted=true, markedForClose=false]], b=47]]
Can anyone please help me?
Determining the attribute type dynamically from the Akeneo attribute value in the payload is not reliable, as values may be present or absent in the payload depending on the data in Akeneo or the attribute's family association in Akeneo.
Better to create a configuration in Magento for attribute mapping between Magento and Akeneo. This would be a one-time config; update it as and when a new attribute is introduced in Akeneo.
Then update your logic to use that mapping plus the attributes available in the payload, and create/update the product in Magento accordingly and dynamically.
Your PINN may be overfitting because the network is learning to satisfy the boundary and initial conditions without correctly enforcing the underlying differential equation across the entire domain. The high weight given to the data loss (the training points) causes the model to prioritize fitting those points perfectly, neglecting the physics-based loss.
You may try a residual-based curriculum learning approach: dynamically sample more collocation points from regions where the physics-based loss is high. The model will then focus on the areas where it is failing to satisfy the governing differential equation, which may improve its generalization.
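As a rough illustration of that idea (a generic NumPy sketch, not tied to your model: candidates are collocation points and residuals their current PDE-residual magnitudes):

import numpy as np

def resample_collocation(candidates, residuals, n_points, rng=None):
    # Pick new collocation points with probability proportional to |residual|
    rng = np.random.default_rng() if rng is None else rng
    weights = np.abs(residuals) + 1e-12   # avoid an all-zero distribution
    probs = weights / weights.sum()
    idx = rng.choice(len(candidates), size=n_points, replace=False, p=probs)
    return candidates[idx]

# Each epoch: evaluate the PDE residual on a dense candidate grid, then train on
# the points sampled here so that high-error regions get more attention.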
To provide a better answer, we may need more details and clarification.
Right-click on the test folder in your project, then click "Run 'Tests in test'", wait for processing, and check your console by clicking on the tick in the top-left corner of the console.
For future readers who wanted to "connect to a wss" instead of serving one, you can use HttpClient#wss instead of HttpClient#webSocket.
We used a CloudFront viewer-request 301 redirect, and it works well. The only "downside" is cosmetic: the browser URL changes to the Grafana workspace hostname.
Is there any solution to this? I have the same problem.
As already stated in the comments by @yotheguitou, you have to commit to save the changes made to your SQLite database since the last commit.
After you execute a DELETE statement, you need to call connection.commit():
cursor.execute("DELETE FROM ticket WHERE ROWID = (SELECT MAX(ROWID) FROM ticket)")
connection.commit()
If you want to automatically commit after any executed statement, set isolation_level=None:
conn = sqlite3.connect("example.db", isolation_level=None)
Just one prefix backslash:
\cp <source_file> <destination_addr>
Since Android API 31: Build.SOC_MODEL
https://developer.android.com/reference/android/os/Build#SOC_MODEL
I had the same issue, and my problem was that I was trying to call http://www.example.com instead of https://www.example.com, and my nginx server was trying to redirect the request from http to https, which preflight didn't like.
The changes during the sequence updates are double protected: by an exclusive lock of the sequence block and by the SEQ latch. It's not possible for two sessions to simultaneously update not only the same sequence but any two sequences in the same database.
If you would update a sequence inside a transaction and AI logging is enabled for the database, then you could restore the database to the state after any transaction and read the last sequence value. I don't believe you will see duplicate values for two transactions.
BTW, what is your OpenEdge version? There were a few changes in the way Progress works with sequences. The most recent one was in 12.2.
I have recently had an issue from an update around that time which had this completion (https://code.visualstudio.com/docs/copilot/ai-powered-suggestions) added with a green tab.
For this specific problem, go to settings.json (User), then search for @tag:nextEditSuggestion and change the setting Editor > Inline Suggest > Edits: Allow Code Shifting from always to never.
The way I easily accessed the menu: when the green tab came up, hover over it and click settings.
Note: to activate the changes, I had to close the file after changing the settings and re-open it.
Just copy GoalSeek from VBA; it's one line.
Now we have a new tool -- mise.
curl https://mise.run | sh
mise use -g [email protected]
Also, mise supports uv. If you have installed uv (for example, with mise use -g uv@latest), mise will use it to create virtual environments.
Tampering with the scale sets that Kubernetes creates is not supported, which seems to be the case here. Most likely someone has tried to install this extension directly on the scale set, which resulted in a failure without being able to remove it. As it has failed, there is no actual installation, but the extension resource is in some state that cannot be removed. That is also causing the issue when you try to apply the Kubernetes configuration via Bicep. My advice would be to re-create the AKS cluster or try to replace the current system pool with another one. You can also try to contact Azure Support to see if they can force the removal of the extension, but it is unclear if they will provide support for something they have explicitly said is not supported.
The DigiCert timestamp server (http://timestamp.digicert.com) uses HTTP, not HTTPS. The error you were seeing occurred because the signing tool couldn't reach the HTTP timestamp server through your proxy.
By setting HTTP_PROXY, your signing process can now properly route the HTTP requests to the timestamp server through your corporate proxy, which should resolve the error you were encountering.
The issue was that I was importing the toaster in the View this way:
import { toast } from 'vue-sonner'
But since I installed Sonner as a component (in /ui), I need to import it this way:
import {
toast
} from '@ist/ui';
and I also must add import { toast } from 'vue-sonner' to ui/index.ts.
For anyone looking at this now: as noted, when you have strings ("?") mixed in with ints, you can do the following in pandas.
# convert the data to int64/float64 turning anything (i.e '?') that can't be converted into nan
df["Bare Nuclei"] = pd.to_numeric(df["Bare Nuclei"], errors="coerce")
# if you really need it as int you can then do the following, Int64 can handle NaN values so is useful in these situations
df["Bare Nuclei"] = df["Bare Nuclei"].astype("Int64")
Another way to do it is
use Data::Printer {
class => {
expand => 'all', # default 1
},
};
https://metacpan.org/release/GARU/Data-Printer-0.35/view/lib/Data/Printer.pm#CUSTOMIZATION
As of now we are 12 years in the future; I'm not sure when the expand param was added, but it's pretty useful!
Thank you for this post.
I'm getting an error: "No action specified" when using the command WinSCP.com /keygen ....
Do you know what that means?
Regards,
HD
Use { Count: > 0 } to check whether the collection is not null and the count is greater than 0 at the same time.
if(yourCollection is { Count: > 0 })
{ }
I solved it by not using the combined template. Just created Angular 20 project with CLI in VS Code and Web API in Visual Studio, then ran both (ng serve for Angular and run API from VS). Worked fine.
Rather than a misuse of the API, this appears to have been a driver-related issue on AMD's end, which is not present in driver version 31.0.21923.11000.
To find the time shift from UTC on a running computer, the code below is the shortest solution I have found:
Function UTCTime()
'Get the UTC
'function from: https://stackoverflow.com/questions/1600875/how-to-get-the-current-datetime-in-utc-from-an-excel-vba-macro
Dim dt As Object, utc As Date
Set dt = CreateObject("WbemScripting.SWbemDateTime")
dt.SetVarDate Now
utc = dt.GetVarDate(False)
UTCTime = utc
End Function
'function to calculate time shift from UTC
Function TimeZone()
TimeZone = Hour(Now() - UTCTime())
End Function
In GoLand I had to remove all .idea files with rm .idea -rf and reopen the project to make this error disappear.
This error happens to me very often. I'm an expert developer and, yes, sometimes there is no reason for this error.
If you can exclude every type of "real" error as described in the other answers, simply try to comment out the variable declaration (in this case: Dim SixTables), execute the code, and you should get the error "variable not declared". At this point you are sure that your code doesn't contain a double definition.
Uncomment the declaration and execute again: kind of magic, it works!
Generally it happens when you import modules containing the same declaration and you remove one of them.
Example:
Module1 and Module2 both contain a public varX.
Removing (or commenting out) one of the two, you still receive "ambiguous name".
No reason; it's simply a VBA failure.
Removing both and executing, VBA realizes varX is not declared. Now you are in a consistent state.
Set your declaration again and the problem is solved.
Because I'm old, I still use the Enum construct, and in this case this error is quite common.
My work-around for this was to split into two table visuals side-by-side. The columns you want frozen displayed to the left, and the non-frozen columns to the right. Not ideal but is probably "good enough" in a lot of cases.
With Sling Models with @Exporter annotations, AEM component data can be exposed as JSON and accessed through the component's JSON endpoint.
Found the issue.
It was not Android Studio related; it was a macOS setting that was somehow turned on out of nowhere.
I disabled it by going to Settings > Accessibility > Hover Text > Disabled "Hover Text"
// ...
import androidx.compose.ui.tooling.preview.Preview
@Composable
fun MessageCard(name: String) {
Text(text = "Hello $name!")
}
@Preview
@Composable
fun PreviewMessageCard() {
MessageCard("Android")
}
So I partly figured it out. Turns out that I was trying to put the event listeners on the mailbox itself instead of the item, which made it so the function was never triggered since the event was fired on the item. Haven't found a solution for the title but here is the new base, seems to work just fine (will replace the console.log with the handler functions I have).
useEffect(() => {
const mailbox = Office.context.mailbox;
if(mailbox.item?.itemType==='appointment'){
Office.context.mailbox.item?.addHandlerAsync(Office.EventType.AppointmentTimeChanged, (msg:any)=>{console.log(msg)});
Office.context.mailbox.item?.addHandlerAsync(Office.EventType.RecipientsChanged, (msg:any)=>{console.log(msg)});
}
}, []);
Brad, did you manage to implement the solution the way you wanted?
I am currently facing a similar issue, I'd like to insert an actual blank row in the flextable itself, not the original data.frame.
The "padding" solution is not ideal for me.
Thanks!
-DskipPublishing
did the trick.
maybe you can just try
tc.SetNoDelay(false)
if not yet tried.
Had similar issue and found this link with instructions to fix: https://github.com/twosixlabs/armory/issues/156
I ended up with a workaround based on Hannes' answer. It doesn't do exactly what I want, but close enough.
I couldn't just add a $target variable to my pipeline; it didn't seem to take the updated value from earlier jobs. Instead, I found the dotenv feature of GitLab CI, which allowed me to pass the variable to a later script.
I also scrapped the dbg2release job; it's now part of the debug job.
I now have 2 stages: target, which has optional manual jobs for picking "debug" (+dbg2release) or "release", and package, which has a manual job "package" to publish the package using the configuration that was selected in the previous stage.
It still has annoyances:
Users can run the "package" job even if they didn't pick a target.
Users have to start 2 manual jobs
Users can start both the "debug" and "release" jobs. In that case, the first one to run is ignored.
stages:
  - target
  - package

debug:
  stage: target
  rules:
    - if: '$CI_COMMIT_BRANCH == "develop" && $CI_PIPELINE_SOURCE == "push"'
      when: manual
      allow_failure: true
  before_script:
    - []
  after_script:
    - []
  script:
    - echo "TARGET=DEBUG" > target.env
  artifacts:
    reports:
      dotenv: target.env

release:
  stage: target
  rules:
    - if: '$CI_COMMIT_BRANCH == "develop" && $CI_PIPELINE_SOURCE == "push"'
      when: manual
      allow_failure: true
  before_script:
    - []
  after_script:
    - []
  script:
    - echo "TARGET=RELEASE" > target.env
  artifacts:
    reports:
      dotenv: target.env

package:
  stage: package
  rules:
    - if: '$CI_COMMIT_BRANCH == "develop" && $CI_PIPELINE_SOURCE == "push"'
      when: manual
      allow_failure: false
  script:
    - echo $TARGET
    - do things
If you want to push your changes to the developer branch of "MyAwesomeProject" and "ClientProject", then I think you can use a git alias (a git shortcut command).
To push to both remote branches:
git config alias.pushall '!git push origin developer && git push client developer'
Then run:
git pushall
Please let me know if there are other ways or improvements.
This has already been downvoted, but: it turns out that Microsoft has renamed requests to AppRequests in some contexts.
Where can I put the below code section? In which file?
<Target Name="EffectCompile" Condition="'@(Effect)' != '' ">
<Exec Command="&quot;$(MSBuildProgramFiles32)\Windows Kits\10\bin\10.0.22621.0\x64\fxc.exe&quot; /T ps_3_0 /Fo %(Effect.RelativeDir)%(Effect.FileName).ps %(Effect.Identity)"/>
<!-- Add this if you need to embed the file as WPF resources -->
<ItemGroup>
<Resource Include="%(Effect.RelativeDir)%(Effect.FileName).ps" />
</ItemGroup>
</Target>
Instead of using ' and " directly, we can use the hex escape \x27 to represent single quotes and \x22 to represent double quotes in the regex string.
r"^[\x22\x27]+$"
Actually there is an API for Direct Admin which is described in the docs here:
https://docs.directadmin.com/developer/api/#api-access
And here you can find more details:
For me there was a different number of columns in the file at a certain line. It helped to pass the names argument when reading the CSV, which fixes the column count.
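For example (the file name and column names below are placeholders; header=None goes with names when the file has no header row):

import pandas as pd

# Supplying enough column names up front keeps the parser from failing on rows
# that have more fields than the first line suggested; shorter rows get NaN padding.
df = pd.read_csv("data.csv", names=["col1", "col2", "col3", "col4"], header=None)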
This solved it for me:
In short: A Windows update broke the IIS
In the file: C:\Windows\System32\inetsrv\config\administration.config
the %WINDOWS_PUBLIC_KEY_TOKEN% is no longer valid
Replace it with 31bf3856ad364e35
Done!
https://learn.microsoft.com/en-us/answers/questions/5544355/iis-no-longer-displaying-my-websites
Your Dice()
function doesn't return any values. Change it to:
def Dice(listabc):
    return random.choice(listabc)
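For example:

print(Dice([1, 2, 3, 4, 5, 6]))   # prints one randomly chosen value, e.g. 4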
I wanted to share my observations with "ODBC Driver 18 for SQL Server".
Doing SQLSetConnectAttr(..., SQL_ATTR_ENLIST_IN_XA, OP_START, ...) and then SQLSetConnectAttr(..., SQL_ATTR_TXN_ISOLATION, ...), gives me the error in SQLGetDiagRec() "Operation invalid at this time".
Using your suggested XID data layout didn't work for version 18.
I get XA error (-8) (XAER_DUPID, duplicate XID) whenever I put gtrid (length 24) at data[0] and bqual (length 12) at data[64].
My guess is that MSDTC now reads this like other DBs do:
gtrid = data[0] and bqual = data[24]
so bqual will always be the same (zeros) => duplicate XID.
"The incoming tabular data stream (TDS) protocol stream is incorrect. The stream ended unexpectedly."
If someone has this error at SQLGetDiagRec(), make sure you have SQL_AUTOCOMMIT_ON when enlisting XID's and SQL_AUTOCOMMIT_OFF when done enlisting.
This cost me a lot of time to figure out.
Besides executing the Query from @nfrmtkr above, I would suggest setting TRACE_XA to 4 (Verbose) and the TraceFilePath.
You can see each XA Call with a few details in the log file.
For Windows: https://learn.microsoft.com/en-us/troubleshoot/windows/win32/enable-diagnostic-tracing-ms-dtc
On Linux you can use the tool mssql-conf.
/opt/mssql/bin/mssql-conf set distributedtransaction.trace_xa 4
/opt/mssql/bin/mssql-conf set distributedtransaction.tracefilepath /tmp
Restart SQL Server afterwards.
The file will look like this /tmp/MSDTC-sqlservr.exe-444.log
Just change the playground, and it will work. It took me two days.
So, I got it working.
When I executed the migration from angular to NX monorepo it created tsconfig.app.json and tsconfig.spec.json under apps/my-app/...
I received errors saying "cannot find import path", even though I had clearly defined everything in tsconfig.base.json.
That is... well... I copied tsconfig.app.json and named it tsconfig.json, and you know what? It worked.
Based on your use case, I would actually recommend using the Connect feature from Stripe along with Destination Charges. Destination Charges allow your customers to transact with your platform for products or services listed by your connected accounts, removing the need to recreate your products across different connected accounts. The products will exist solely on your platform account.
Also, with Destination Charges, you can set the on_behalf_of parameter; this means that the charges will settle in the connected account's country and currency.
You'd also have to choose the type of connected account you'd like to associate with the charge. I would actually suggest going with the Express account type, as it requires the least effort to integrate and pairs well with Destination Charges.
You can see the other charge types in this table and other account types here, each of which has its own use cases.
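For illustration, a destination charge with on_behalf_of could look roughly like this in Python (the account ID, amount, and currency are placeholders):

import stripe

stripe.api_key = "sk_test_..."  # platform secret key (placeholder)

payment_intent = stripe.PaymentIntent.create(
    amount=2000,                   # amount in the smallest currency unit
    currency="usd",
    payment_method_types=["card"],
    transfer_data={"destination": "acct_XXXXXXXXXXXX"},  # connected account receiving the funds
    on_behalf_of="acct_XXXXXXXXXXXX",  # settles in the connected account's country/currency
)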
If [ System.Security.Authentication.AuthenticationException ] is thrown with [ RemoteCertificateNameMismatch ], but it is confirmed/certain that:
The certificate CommonName/Subject and/or SubjectAltNames contain the UFQDNs/FQDNs that the SOAP/WCF client is configured to use.
Trust anchors are in place.
Other Microsoft .NET/ASP clients (Internet Explorer/Edge/etc.) have no certificate validation errors at the same URL endpoint.
In this case, we need/want to dump, with DEBUG statements, the text/substrings that were expected and what was found in the remote certificate (SubjectAltName) during the TLS handshake:
"RemoteCertificateNameMismatch DEBUG: Expecting [ X (array of subjAltNames) ] but found [ Y ]"
Are these variables [X] and [Y] already populated somewhere, so that we can debug/printf them (to a log or stdout) in the try{}/catch{}?
We are working with a development subcontractor and do not have direct access to the code, but we want to assist them.
You should read up on exactly what latent variables are. These are unmeasured variables that are estimated by taking shared variance between a variety of indicators. They are NOT for linking conceptually connected measures, that are statistically unrelated, such as gender and age within demography.
With this in mind, most or all of your latent variables do not make sense and this may have contributed to the problems with convergence. Try thinking about whether it makes sense for any of your variables to be latent variables at all or if there is a better way to analyse your data.
In Fedora, the libraries are called freeglut, so the command would be
sudo dnf install freeglut freeglut-devel
Worked on NobaraOS
Morning.
Because you don't have permission to push to that Docker Hub repo!
So try to:
1. Log in with docker login.
2. Tag the image: docker tag 007-thebond docker.io/<your-username>/007-thebond
When you've done that, try to push it again.
See Percent-encoding in a URI (Wikipedia). You can actually pass any (ASCII) character in the URL if it is encoded properly.
See also RFC 3986 (Uniform Resource Identifier (URI): Generic Syntax): "2.4. When to Encode or Decode".
Most of the time Perl's CGI module does "the right thing" automatically.
Notice the two ? characters in the URI. That makes the driver fail to parse it correctly in modern versions of the MongoDB Node driver (which Mongoose 5+ uses).
mongoose.connect(
"mongodb://stackoverflow:[email protected]:31064/thirty3?authSource=admin",
{ useNewUrlParser: true, useUnifiedTopology: true }
);
I'm Sharan from Apptrove!
We're building a Slack community for developers, with coding challenges, tournaments, and access to tools and resources to help you sharpen your skills. It's free and open; would love to see you there!
Link to join: https://join.slack.com/t/apptrovedevcommunity/shared_invite/zt-3d52zqa5s-ZZq7XNvXahXN2nZFtCN1aQ
What's happening is that you're not actually using a Dense layer the way you might expect from a 1D vector setting (e.g., after a Flatten).
How Keras Dense really works
In Keras/TensorFlow, Dense is implemented as a matrix multiplication between the last dimension of the input and the layer's weight matrix. It does not require you to flatten the entire input tensor, nor does it care about the other dimensions.
If the input has shape (batch, H, W, C), a Dense(units=64) layer just takes the last axis C and produces output (batch, H, W, 64).
Internally, TensorFlow broadcasts the weight multiplication over all other dimensions (H and W here).
That's why you don't get an error: your inputs have shapes like (batch, 1, T, 64), and Dense just treats (1, T) as "batch-like" dimensions that it carries along.
Why this allows dynamic input sizes
Because the Dense operation is applied pointwise along all non-last dimensions, it doesn't matter whether T = 12 or T = 32. The only requirement is that the channel dimension (C) is fixed, since that's what the weight matrix expects. The temporal dimension (T) can vary freely.
So in your example:
Input: (12, 1, 32, 64) → Dense(64) → (12, 1, 32, 64)
Input: (17, 1, 12, 64) → Dense(64) → (17, 1, 12, 64)
Both work fine because Dense is applied independently at each (batch, 1, time) location.
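A tiny check of that behaviour (same Dense layer, two different time lengths):

import numpy as np
import tensorflow as tf

dense = tf.keras.layers.Dense(64)                      # weights act only on the last axis (C=64)
a = dense(np.zeros((12, 1, 32, 64), dtype="float32"))  # -> shape (12, 1, 32, 64)
b = dense(np.zeros((17, 1, 12, 64), dtype="float32"))  # -> shape (17, 1, 12, 64)
print(a.shape, b.shape)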
Contrast with pooling or flattening
If you had tried to do Flatten → Dense, then yes, you would need a fixed time dimension, because flattening collapses everything into a single vector.
But using Dense "in place" like this behaves more like a 1x1 Conv2D: it remaps features without collapsing spatial/temporal dimensions.
TL;DR
You're not getting an error because Dense in Keras is defined to operate on the last axis only, broadcasting across all other axes. It's essentially equivalent to applying a 1x1 Conv2D across the feature dimension. That's why variable-length time dimensions are supported automatically in your setup.
Scalability, security, and long-term maintenance are typically more important concerns when developing enterprise Android apps than coding speed alone. Here's a quick summary:
Programming Language
Kotlin: modern, concise, and safer than Java, and officially backed by Google. The ideal option for brand-new business applications.
Java: still supported and widely used; ideal if your business already uses a Java-based system.
Tools & Frameworks
Android Jetpack (Google libraries): helps with lifecycle management, data storage, user interface, etc., improving the speed and cleanliness of development.
Dependency Injection: to make managing big projects easier, use Hilt or Dagger.
To ensure safe and effective API communication, use Retrofit or OkHttp.
Enterprise-Level Requirements
Security: use Android Enterprise's work profiles, data encryption, and secure logins like OAuth/SSO.
Testing: to ensure quality, use automated testing tools like Robolectric, Espresso, and JUnit.
Scalability: to ensure that the application can expand without getting disorganized, consider a modular architecture such as MVVM or Clean Architecture.
Integration of Backend and Cloud
Combine with business backends such as Google Cloud, AWS, or Azure.
If you want a speedy setup, use Firebase for analytics, push alerts, and authentication.
Use Kotlin + Jetpack libraries + safe enterprise tools for Android development in an enterprise setting. To make the software scalable and future-proof, incorporate robust testing, a modular architecture, and cloud support.
You might try:
BullModule.forRoot({
  redis: {
    host: "YOUR_REDIS_HOST",
    port: 6379,
    db: 0,
    password: "YOUR_REDIS_PASSWORD",
    tls: { // Specify the host and port again here
      host: "YOUR_REDIS_HOST",
      port: 6379,
    }
  }
})