You can make modelValue a discriminated union key by range, so TS can infer the correct type automatically. For example:
type Props =
| { range: true, modelValue: [number, number] }
| { range?: false, modelValue: number };
Then use that type in your defineProps and defineEmits so no casting is needed.
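Outside of Vue, the same narrowing can be sketched in plain TypeScript (the format helper below is hypothetical, just to show how checking range narrows modelValue):

```typescript
type Props =
  | { range: true; modelValue: [number, number] }
  | { range?: false; modelValue: number };

// Hypothetical helper: checking `range` narrows `modelValue` automatically.
function format(p: Props): string {
  if (p.range) {
    // Here TS knows modelValue is [number, number]
    return `${p.modelValue[0]}-${p.modelValue[1]}`;
  }
  // Here TS knows modelValue is number
  return String(p.modelValue);
}
```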
Maybe the unique parameter in the column annotation could help you?
#[ORM\Column(unique: true, name: "app_id", type: Types::BIGINT)]
private ?int $appId = null;
If the user is supposed to be unique, maybe a OneToOne relation would be better than a ManyToOne. I am pretty sure using OneToOne will also generate a unique index in your migration for you, even without the unique parameter.
#[ORM\OneToOne(inversedBy: 'userMobileApp', cascade: ['persist', 'remove'])]
#[ORM\JoinColumn(name: "user_id", nullable: false)]
private ?User $user = null;
After adding separate configurations for the two web applications, I'm encountering an issue with the custom binding for the second web app. I already have a setup for custom binding and DNS for the first web app.
Here's a lazy solution compared to the other answers here: my Xcode project threw me this error while an iPad was connected for testing. I tried deleting DerivedData, restarting Xcode, etc., but none of these helped. I ended up abandoning that project and creating a new one. The new project does not throw this error anymore.
If you are on Mac and have been running your script like python my-script.py, you might want to try running it with sudo. I spent 30 minutes debugging correct code before realizing that "requests" needed sudo permissions.
I have the same question. Unfortunately, both links in the highlighted answer are now outdated. Does anyone have newer info on this?
For the condition I tried:
#intentName1 && #intentName2
intents.contains('intentName1') && intents.contains('intentName2')
intents.values.contains('intentName1') && intents.values.contains('intentName2')
The first two didn't throw an error, but the dialog was simply skipped when I entered an utterance in which both intents were recognized. The final one threw an error:
SpEL evaluation error: Expression [intents.size() > 0 && intents.values.contains('intentName1') && intents.values.contains('intentName2') && @entityName] converted to [intents.size() > 0 && intents.values.contains('intentName1') && intents.values.contains('intentName2') && entities['entityName']?.value] at position 73: EL1008E: Property or field 'values' cannot be found on object of type 'CluIntentResponseList' - maybe not public or not valid?
In the plugin developed for OPA (https://github.com/EOEPCA/keycloak-opa-plugin), it seems that the Admin UI was customised (see js/apps/admin-ui/src/clients/authorization/policy).
You have to manually allow location access from the phone settings by going to Settings > Privacy and Security > Location > Safari (or any other browser).
Got it — sounds like you’re trying to bypass the whole “training” aspect and just hard-code your decision logic in a tree-like form. In that case, sklearn’s DecisionTreeClassifier isn’t really the right tool, since it’s built to learn from data. A custom tree structure, like the Node class example given, would give you more control and let you directly define each condition without needing any training step. This way, you still get the decision-tree behavior, but exactly how you’ve designed it.
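As a sketch of that idea (the feature names and thresholds below are made up for illustration), a minimal hand-built decision tree might look like:

```python
class Node:
    """A hand-written decision node: no training, just hard-coded logic."""
    def __init__(self, feature=None, threshold=None, left=None, right=None, label=None):
        self.feature = feature      # feature name to test (None for a leaf)
        self.threshold = threshold  # go left if value <= threshold, else right
        self.left = left
        self.right = right
        self.label = label          # prediction returned at a leaf

    def predict(self, sample):
        if self.label is not None:  # leaf node: return the hard-coded answer
            return self.label
        branch = self.left if sample[self.feature] <= self.threshold else self.right
        return branch.predict(sample)

# Hypothetical rules: split on temperature first, then humidity.
tree = Node(
    feature="temp", threshold=20,
    left=Node(label="cold"),
    right=Node(
        feature="humidity", threshold=0.7,
        left=Node(label="warm-dry"),
        right=Node(label="warm-humid"),
    ),
)
```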
Hi, try SELECT CONCAT(REPLICATE('0', 16 - LEN(NAG)), NAG) AS NAG16, where NAG is your varchar field and 16 is the length that you need.
It seems this behaviour of SvelteKit is not replicable in Next.js. There is a similar feature in Next.js called prerendering, but prerendering only works for static pages.
For dynamic pages, the server components start to render on the server only after the page is navigated to. If needed, a suspense boundary can be used as a placeholder (which is displayed instantly) before the whole page is rendered.
With respect to the bandwidth wasted by fetching links when they come into the viewport, @oscar-hermoso's answer of switching the prefetch option to on-hover works.
After using both frameworks, it feels as if SvelteKit is the more carefully thought-out of the two. Next.js relies on a CDN to make the site fast, while SvelteKit uses a simple but clever trick, so for end users the SvelteKit version feels much faster.
For me, I added this line to the top of the requirements.txt file, and I was able to install the packages successfully:
torch==2.2.2
I don't know whether this is any help, but I fixed a similar issue just by putting "" around the echo line.
I would recommend taking a look at the Mongoose Networking library.
It's a lightweight open-source networking library designed specifically for embedded systems. It includes full support for most networking protocols, including MQTT. With the MQTT support, you can build not just a client, but also an MQTT broker. The library is highly portable and supports a wide variety of microcontrollers and platforms. It can run on bare metal or with an RTOS like FreeRTOS or Zephyr.
Mongoose has solid documentation of all its features and usage, and you can find an example of building a simple MQTT server here.
Heads up: I am part of the Mongoose development team. Hope this solves your problem!
Adding one more suggestion for Kubernetes clusters (future readers may find this useful): check whether your clock is skewed using chronyc tracking or timedatectl status. If Leap Status is Not Synchronised, then perform an NTP synchronization.
The official SQLMesh documentation and source code currently focus on Slack and email as supported notification targets. There is no out-of-the-box support for Microsoft Teams mentioned.
However, since Teams supports incoming webhooks similar to Slack, you can likely adapt the Slack webhook configuration for Teams by:
Creating an Incoming Webhook in your Teams channel.
Using that webhook URL in your SQLMesh notification configuration.
Formatting the payload to match Teams' connector message format (see https://learn.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/connectors-using).
Try configuring a Teams webhook and test sending a JSON payload from SQLMesh using the same mechanism as Slack.
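As a rough sketch of the sending side (the webhook URL is a placeholder, and the plain text card below is only the simplest of the payload shapes Teams incoming webhooks accept):

```python
import json
import urllib.request

def build_teams_payload(text):
    """Minimal text payload accepted by Teams incoming webhooks."""
    return {"text": text}

def notify_teams(webhook_url, text):
    # POST the JSON payload to the Teams incoming webhook
    payload = json.dumps(build_teams_payload(text)).encode("utf-8")
    req = urllib.request.Request(
        webhook_url,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

# notify_teams("https://example.webhook.office.com/...", "SQLMesh run failed")
```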
What I’d do in UiPath is pretty straightforward:
Read the new data from the first Excel file using Read Range (under the Modern Excel activities).
Read the existing data from the target sheet in the second file.
Combine them: put the new data above the existing data in a single DataTable, using the Merge Data Table activity or by using newDataTable.Clone and importing rows in the right order.
Write the merged table back to the target sheet using Write Range.
Basically, you’re replacing the sheet content with “new rows first, then old rows” instead of trying to physically insert rows at the top in Excel, which UiPath doesn’t handle directly.
Reset someClass.someProp = null; before the second render, or use beforeEach to mock and reset state properly.
With Windows 11, it was additionally necessary for me to add sqlservr.exe to the allowed firewall apps.
I followed those instructions:
https://docs.driveworkspro.com/Topic/HowToConfigureWindowsFirewallForSQLServer
Thanks for all,
Harald
I realize that this thread is really old, but perhaps it's still alive enough to find someone to help me out. On a daily basis, I have different documents in which I need to highlight certain words (they change with every doc). I'd like an easy way to tell Google Docs to highlight the words in yellow each time. The previous posts seem to provide some info, but I can't figure out how to get any of them to run properly. I'd like to envision a Google Docs "template" to which I would copy the text. Then, I could run some type of script based on the keywords (even if I have to manually edit the script each time) to highlight the words. I could then copy that altered text into the final document. But I need step-by-step instructions on how to get this working.
Try wrapping it in a CDATA construct. An example present in the link below shows the case:
<![CDATA[
Within this Character Data block I can
use double dashes as much as I want (along with <, &, ', and ")
*and* %MyParamEntity; will be expanded to the text
"Has been expanded" ... however, I can't use
the CEND sequence. If I need to use CEND I must escape one of the
brackets or the greater-than sign using concatenated CDATA sections.
]]>
More to read:
What does <![CDATA[]]> in XML mean?
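To confirm the effect, here's a quick check with Python's standard XML parser: inside CDATA, characters like < and && come through as literal text without any escaping:

```python
import xml.etree.ElementTree as ET

# CDATA lets otherwise-special characters pass through untouched
doc = "<note><![CDATA[5 < 6 && --dashes-- are fine here]]></note>"
root = ET.fromstring(doc)
print(root.text)  # the raw characters, exactly as written
```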
4 0 obj
(Identity)
endobj
5 0 obj
(Adobe)
endobj
8 0 obj
<<
/Filter /FlateDecode
/Length 178320
/Length1 537536
/Type /Stream
>>
stream
[... binary FlateDecode stream data (178,320 bytes) omitted ...]
I also have the same error; my HOMEDRIVE and HOMEPATH seem to be correct. However, when I type bash, my WSL bash starts, not the MSYS2 bash. I also have the MSYS path in my environment variables so I can use certain packages natively, so this could also be causing issues. Any suggestions?
Forgive me if someone already answered this, but from what I understand, it did exactly what it was told to. Your original image was mostly grey and black, so the two colors it chose to downsize to were grey and black. It doesn't matter if you set it to "L" or "RGB", since you gave it a predominantly grey and black image. As the other comment mentioned, you can create a very small image where the desired black & white palette is encoded into a minimal number of pixels, and pass this to the quantize method.
Hi, this is the ChatGPT version; you might like this as well:
private Rectangle GetCellBounds(int col, int row)
{
    int x = tlp_tra_actual.GetColumnWidths().Take(col).Sum();
    int y = tlp_tra_actual.GetRowHeights().Take(row).Sum();
    int w = tlp_tra_actual.GetColumnWidths()[col];
    int h = tlp_tra_actual.GetRowHeights()[row];
    return new Rectangle(x, y, w, h);
}

void tableLayoutPanel1_CellPaint(object sender, TableLayoutCellPaintEventArgs e)
{
    e.Graphics.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
    try
    {
        int row = e.Row;
        var g = e.Graphics;
        float radiusFactor = 1.4f; // 1.0 = original, >1 = bigger arc
        Rectangle cellRect = GetCellBounds(0, row);

        // Make radius bigger than cell min size * factor
        int baseRadius = Math.Min(cellRect.Width, cellRect.Height);
        int radius = (int)(baseRadius * radiusFactor);

        using (GraphicsPath path = new GraphicsPath())
        {
            // Move starting point higher up (because arc is larger)
            path.StartFigure();
            path.AddLine(cellRect.Left, cellRect.Bottom - radius, cellRect.Left, cellRect.Bottom);
            path.AddLine(cellRect.Left, cellRect.Bottom, cellRect.Left + radius, cellRect.Bottom);
            // Bigger arc, starts at bottom and sweeps up to left
            path.AddArc(
                cellRect.Left,            // arc X
                cellRect.Bottom - radius, // arc Y
                radius,                   // arc width
                radius,                   // arc height
                90, 90);
            path.CloseFigure();
            using (Brush brush = new SolidBrush(Color.FromArgb(150, Color.DarkBlue)))
            {
                g.FillPath(brush, path);
            }
        }
    }
    catch
    {
        // Swallow paint errors so a bad cell never crashes the paint loop
    }
}
If an optional Core Data property has a default value set in the model editor, then:
Core Data never stores nil for that property — it immediately populates new objects with the default value.
That means even if you never explicitly set it, reading it will return the default (e.g., 0), not nil.
valueForKey: will also return an NSNumber with that default value, not nil.
How to allow nil detection:
Remove the default value in the model editor and leave the property optional.
After that, Core Data will store nil if you don’t set a value.
Now you can detect nil using valueForKey: or by declaring it as an NSNumber *.
Best practice is to use single quotes all the time:
ORG1_PASSWORD='$orgOne12345'
ORG2_PASSWORD='$orgTwo180000'
ORG3_PASSWORD='ORG_Admin123'
With no quotes or with double quotes, the variables will be interpolated in most cases (when interpreted by bash). Escaping each character is too verbose, and you have to remember to do it properly every time you change the password.
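A quick demonstration of the difference (variable names taken from the example above):

```shell
ORG1_PASSWORD='$orgOne12345'   # single quotes: the $ stays literal
echo "$ORG1_PASSWORD"          # prints $orgOne12345

WRONG="$orgOne12345"           # double quotes: bash expands the (unset) variable
echo "${WRONG:-<empty>}"       # prints <empty>
```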
function test<T extends string>(arr: T[], callback: (get: (key: T) => string) => void): Promise<void> {
return Promise.resolve();
}
test(['a', 'b', 'c'], (get) => {
get('a'); //works
get('d'); // compiler failure
});
It's not a perfect solution, since crontab works with days of the month, not weeks, but the pattern I'd suggest is:
0 3 14,28 * *, which executes the job on the 14th and 28th of each month (at 3 AM). That's close to bi-weekly, but since most months are 30 or 31 days long, you actually get: an execution on the 14th, two weeks pass, another execution, two weeks plus 2-3 days pass, another execution, then exactly two weeks pass, and so on.
If it has to be exactly 14 days apart, it could be a bit more tricky.
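If exactly 14 days matter, one workaround (the path and time below are placeholders) is to run the job weekly and let the command itself skip every other week, based on the number of whole weeks since the Unix epoch:

```
# crontab: every Monday at 03:00, but the job only runs on even-numbered
# weeks since the epoch; note that % must be escaped as \% inside crontab
0 3 * * 1 test $(( $(date +\%s) / 604800 \% 2 )) -eq 0 && /path/to/job
```

This keeps a strict 14-day cadence regardless of month lengths.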
from PIL import Image, ImageEnhance
import requests
from io import BytesIO
# Load your image (update the path if needed)
base_image = Image.open("Screenshot_20250814_101245.jpg").convert("RGBA")
# Load CapCut logo (transparent PNG from web)
logo_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/6/6b/CapCut_Logo.svg/512px-CapCut_Logo.svg.png"
response = requests.get(logo_url)
logo_image = Image.open(BytesIO(response.content)).convert("RGBA")
# Resize logo to medium size (15% of image width)
base_width, base_height = base_image.size
logo_scale = 0.15
new_logo_width = int(base_width * logo_scale)
aspect_ratio = logo_image.height / logo_image.width
new_logo_height = int(new_logo_width * aspect_ratio)
logo_resized = logo_image.resize((new_logo_width, new_logo_height), Image.LANCZOS)
# Set opacity to 60%
alpha = logo_resized.split()[3]
alpha = ImageEnhance.Brightness(alpha).enhance(0.6)
logo_resized.putalpha(alpha)
# Position logo in bottom-right corner
position = (base_width - new_logo_width - 10, base_height - new_logo_height - 10)
# Paste logo onto original image
combined = base_image.copy()
combined.paste(logo_resized, position, logo_resized)
# Save the result
combined.save("edited_with_capcut_logo.png")
print("✅ Saved as 'edited_with_capcut_logo.png'")
Looks like the issue’s not with react-export-excel itself but with how npm is trying to grab one of its dependencies over SSH from GitHub. Your network or firewall is probably blocking port 22, which is why it’s timing out.
I’d switch Git from SSH to HTTPS so it can bypass that restriction:
git config --global url."https://github.com/".insteadOf git@github.com:
Then try installing again.
If it still gives you trouble, you might just want to replace react-export-excel; it's pretty outdated. I've had better luck using the xlsx + file-saver combo, which is actively maintained.
Has anyone managed to solve this problem?
Based on @Pete Becker's answer, I decided to use the following lock-less method: Prepare the output in a std::stringstream and send it to std::cerr in one (expected to be atomic) call.
#include <iostream>
#include <sstream>
[...]
std::stringstream lineToPrint;
lineToPrint << " Hello " << " World " << std::endl;
std::cerr << lineToPrint.str();
There are (at least) two ways you could go about it, seeing that the column structure is identical in the two files.
You could use a Read Range activity on the source Excel file to copy, and an Append Range activity on the destination file. Both of these activities need to be in an Excel Process Scope container.
Another way to go about it could be to read both Excel files (Read Range) and use a Merge Data Table activity to merge the two, before using a Write Range activity to write the entirety back to the destination file.
Best Regards
Soren
That is the expected behaviour. The line apex.item("P1_ERROR_FLAG").setValue("ERROR"); sets the value of the page item on the client side. Observe the network tab in the browser console: there will be no communication with the server when this happens. The value only gets sent to the server when the page is submitted or when the item is explicitly included in "Items to Submit".
The post does not say when this code executes, but I would create a dynamic action on change of P1_ERROR_FLAG that has an action of "Execute Server-side Code", with "Items to Submit" set to P1_ERROR_FLAG and code NULL;. This will submit that page item to the server.
There might be better solutions for your use case, but then please provide more info (as much as possible) about how the page is set up: at what point do you need the P1_ERROR_FLAG value, and how is it used?
After switching from 11g to 12c, I use Altova XMLSpy.
Here is a video showing how to do it:
https://www.youtube.com/watch?v=piVbWtChd6I
And one more nice feature - XSLT / XQuery Back-mapping in Altova XMLSpy:
https://www.youtube.com/watch?v=lK1EDLbxxyo
While writing this question I fiddled around some more and found a solution, but since I haven't found a similar question with a working answer so far, I decided to post this question anyway, including the answer - I hope that's OK.
For some reason, setting the environment variables using solr.in.sh doesn't work. However, setting them via compose's environment: block works just fine, so just adjusting this block to
environment:
ZK_HOST: [SELF-IP]:2181
SOLR_OPTS: -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005 -Djetty.host=[SELF-IP]
SOLR_TIMEZONE: Europe/Berlin
SOLR_HOST: [SELF-IP]
worked out sufficiently, no host-mode required.
# Construct the path to the PyQt6 plugins directory
# pyqt6_plugins_path = '/opt/python-venv/venv-3.11/lib/python3.11/site-packages/PyQt6/Qt6/plugins'
pyqt6_plugins_path = os.path.join(sys.prefix, 'lib', f'python{sys.version_info.major}.{sys.version_info.minor}', 'site-packages', 'PyQt6', 'Qt6', 'plugins')
# Set QT_PLUGIN_PATH to include both the PyQt6 plugins and the system Qt plugins
os.environ['QT_PLUGIN_PATH'] = f'{pyqt6_plugins_path}:/usr/lib/qt6/plugins'
# Set the Qt Quick Controls style for Kirigami to prevent the "Fusion" warning
os.environ["QT_QUICK_CONTROLS_STYLE"] = "org.kde.desktop"
app = QGuiApplication(sys.argv)
engine = QQmlApplicationEngine()
# Add the system QML import path
engine.addImportPath("/usr/lib/qt6/qml")
.btn.disabled,
.btn[disabled],
fieldset[disabled] .btn {
cursor: not-allowed;
...
}
Posting an answer in case anyone has this exact problem (kudos to @Grismar in the comments): setting ssl_verify_client optional_no_ca; will allow the handshake to complete, and $ssl_client_verify will be set to FAILED:unable to verify the first certificate, which is what I wanted to achieve. It will still work as before when the client has no cert at all ($ssl_client_verify is set to NONE).
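For future readers, a minimal sketch of the relevant server block (the certificate paths and the backend address are placeholders):

```
server {
    listen 443 ssl;
    ssl_certificate         /etc/nginx/server.crt;
    ssl_certificate_key     /etc/nginx/server.key;
    ssl_client_certificate  /etc/nginx/ca.crt;

    # complete the TLS handshake even if the client cert can't be verified
    ssl_verify_client optional_no_ca;

    location / {
        # pass the result (NONE / SUCCESS / FAILED:...) to the backend
        proxy_set_header X-SSL-Client-Verify $ssl_client_verify;
        proxy_pass http://127.0.0.1:8080;
    }
}
```

The backend can then decide per-request what to do with unverified certificates.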
You probably want .CreatedSince (elapsed time since the image was created).
When you talk about production and testing, I would assume you maintain two separate instances of your service side by side: one for testing and one for production. That's because you typically do not want to shut down your production application just to test a new version.
So I would start two instances, one with TEST_MODE and one with PRODUCTION set. You could do that by running your python script twice, you'll probably want to create two batch files that first set the correct ENV variables and then run the frontend and backend scripts. Depending on those two ENV variables, you set a different database URL as well as a different frontend URL.
I faced the same issue and used the SQLCMD or BCP method to export the file as UTF-8. Please see my SP below for details.
ALTER PROCEDURE [wmwhse1].[SP_CUSTOMER_GetLoadDataLastHourEmail]
@StorerKeys NVARCHAR(500) = 'XYZ',
@EmailTo NVARCHAR(255) = '[email protected]',
@EmailSubject NVARCHAR(255) = '[PROD] CUSTOMER - Load Data Report Hourly'
AS
BEGIN
SET NOCOUNT ON;
DECLARE @FileName NVARCHAR(255);
DECLARE @FilePath NVARCHAR(500);
DECLARE @EmailBody NVARCHAR(MAX);
DECLARE @CurrentDateTime NVARCHAR(50);
DECLARE @HtmlTable NVARCHAR(MAX);
DECLARE @RecordCount INT;
DECLARE @BcpCommand NVARCHAR(4000);
BEGIN TRY
-- Generate timestamp for filename
SET @CurrentDateTime = REPLACE(REPLACE(CONVERT(NVARCHAR(50), GETDATE(), 120), '-', ''), ':', '');
SET @CurrentDateTime = REPLACE(@CurrentDateTime, ' ', '_');
SET @FileName = 'LoadDataReport_' + @CurrentDateTime + '.csv';
-- Set file path - ensure this directory exists and has write permissions
SET @FilePath = 'C:\temp\' + @FileName;
-- Get data for HTML table and record count
DECLARE @TempTable TABLE (
storerkey NVARCHAR(50),
MANIFEST NVARCHAR(50),
EXTERNALORDERKEY2 NVARCHAR(100),
LOADSTOP_EDITDATE DATETIME
);
INSERT INTO @TempTable
EXEC [wmwhse1].[SP_CUSTOMER_GetLoadDataLastHourData] @StorerKeys = @StorerKeys;
SELECT @RecordCount = COUNT(*) FROM @TempTable;
PRINT 'Records found in temp table: ' + CAST(@RecordCount AS NVARCHAR(10));
-- Only proceed if we have data
IF @RecordCount > 0
BEGIN
-- Create a global temp table for BCP export
IF OBJECT_ID('tempdb..##TempLoadData') IS NOT NULL
DROP TABLE ##TempLoadData;
CREATE TABLE ##TempLoadData (
storerkey NVARCHAR(50),
MANIFEST NVARCHAR(50),
EXTERNALORDERKEY2 NVARCHAR(100),
LOADSTOP_EDITDATE VARCHAR(50) -- Changed to VARCHAR for consistent formatting
);
INSERT INTO ##TempLoadData
SELECT
storerkey,
MANIFEST,
EXTERNALORDERKEY2,
CONVERT(VARCHAR(50), LOADSTOP_EDITDATE, 120)
FROM @TempTable;
PRINT 'Global temp table created with ' + CAST(@@ROWCOUNT AS NVARCHAR(10)) + ' records';
-- Method 1: Try SQLCMD approach first (more reliable than BCP for this use case)
SET @BcpCommand = 'sqlcmd -S' + @@SERVERNAME + ' -d SCPRD -E -Q "SET NOCOUNT ON; SELECT ''storerkey,MANIFEST,EXTERNALORDERKEY2,LOADSTOP_EDITDATE''; SELECT storerkey + '','' + ISNULL(MANIFEST,'''') + '','' + ISNULL(EXTERNALORDERKEY2,'''') + '','' + LOADSTOP_EDITDATE FROM ##TempLoadData ORDER BY LOADSTOP_EDITDATE DESC" -o "' + @FilePath + '" -h -1 -w 8000';
PRINT 'Executing SQLCMD: ' + @BcpCommand;
EXEC xp_cmdshell @BcpCommand;
-- Check if file was created and has content
DECLARE @CheckFileCommand NVARCHAR(500);
SET @CheckFileCommand = 'dir "' + @FilePath + '"';
PRINT 'Checking if file exists:';
EXEC xp_cmdshell @CheckFileCommand;
-- Alternative Method 2: If SQLCMD doesn't work, try BCP with fixed syntax
DECLARE @FileSize TABLE (output NVARCHAR(255));
INSERT INTO @FileSize
EXEC xp_cmdshell @CheckFileCommand;
-- If file is empty or doesn't exist, try BCP method
IF NOT EXISTS (SELECT 1 FROM @FileSize WHERE output LIKE '%' + @FileName + '%' AND output NOT LIKE '%File Not Found%')
BEGIN
PRINT 'SQLCMD failed, trying BCP method...';
-- Create CSV header
DECLARE @HeaderCommand NVARCHAR(500);
SET @HeaderCommand = 'echo storerkey,MANIFEST,EXTERNALORDERKEY2,LOADSTOP_EDITDATE > "' + @FilePath + '"';
EXEC xp_cmdshell @HeaderCommand;
-- BCP data export to temp file
SET @BcpCommand = 'bcp "SELECT ISNULL(storerkey,'''') + '','' + ISNULL(MANIFEST,'''') + '','' + ISNULL(EXTERNALORDERKEY2,'''') + '','' + ISNULL(LOADSTOP_EDITDATE,'''') FROM ##TempLoadData ORDER BY LOADSTOP_EDITDATE DESC" queryout "' + @FilePath + '_data" -c -T -S' + @@SERVERNAME + ' -d SCPRD';
PRINT 'Executing BCP: ' + @BcpCommand;
EXEC xp_cmdshell @BcpCommand;
-- Append data to header file
DECLARE @AppendCommand NVARCHAR(500);
SET @AppendCommand = 'type "' + @FilePath + '_data" >> "' + @FilePath + '"';
EXEC xp_cmdshell @AppendCommand;
-- Clean up temp file
SET @AppendCommand = 'del "' + @FilePath + '_data"';
EXEC xp_cmdshell @AppendCommand;
END
-- Final file check
PRINT 'Final file check:';
EXEC xp_cmdshell @CheckFileCommand;
END
ELSE
BEGIN
-- Create empty CSV with headers only
DECLARE @EmptyFileCommand NVARCHAR(500);
SET @EmptyFileCommand = 'echo storerkey,MANIFEST,EXTERNALORDERKEY2,LOADSTOP_EDITDATE > "' + @FilePath + '"';
EXEC xp_cmdshell @EmptyFileCommand;
PRINT 'Created empty CSV file with headers only';
END
-- Build HTML table (same as before)
SET @HtmlTable = '
<style>
table { border-collapse: collapse; width: 100%; font-family: Arial, sans-serif; }
th { background-color: #4CAF50; color: white; padding: 12px; text-align: left; border: 1px solid #ddd; }
td { padding: 8px; border: 1px solid #ddd; }
tr:nth-child(even) { background-color: #f2f2f2; }
tr:hover { background-color: #f5f5f5; }
.summary { background-color: #e7f3ff; padding: 10px; margin: 10px 0; border-left: 4px solid #2196F3; }
</style>
<div class="summary">
<strong>Report Summary:</strong><br/>
Generated: ' + CONVERT(NVARCHAR(50), GETDATE(), 120) + '<br/>
Storer Keys: ' + @StorerKeys + '<br/>
Time Range: Last 1 hour<br/>
Total Records: ' + CAST(@RecordCount AS NVARCHAR(10)) + '<br/>
<span style="color: green;"><strong>File Encoding: UTF-8</strong></span>
</div>
<table>
<thead>
<tr>
<th>Storer Key</th>
<th>Manifest</th>
<th>External Order Key</th>
<th>Load Stop Edit Date</th>
</tr>
</thead>
<tbody>';
-- Add table rows
IF @RecordCount > 0
BEGIN
SELECT @HtmlTable = @HtmlTable +
'<tr>' +
'<td>' + ISNULL(storerkey, '') + '</td>' +
'<td>' + ISNULL(MANIFEST, '') + '</td>' +
'<td>' + ISNULL(EXTERNALORDERKEY2, '') + '</td>' +
'<td>' + CONVERT(NVARCHAR(50), LOADSTOP_EDITDATE, 120) + '</td>' +
'</tr>'
FROM @TempTable
ORDER BY LOADSTOP_EDITDATE DESC;
END
SET @HtmlTable = @HtmlTable + '</tbody></table>';
-- Handle case when no data found
IF @RecordCount = 0
BEGIN
SET @HtmlTable = '
<div class="summary">
<strong>Report Summary:</strong><br/>
Generated: ' + CONVERT(NVARCHAR(50), GETDATE(), 120) + '<br/>
Storer Keys: ' + @StorerKeys + '<br/>
Time Range: Last 1 hour<br/>
<span style="color: orange;"><strong>No records found for the specified criteria.</strong></span>
</div>';
END
-- Create email body
SET @EmailBody = 'Please find the Load Data Report for the last hour below and attached as UTF-8 encoded CSV.
' + @HtmlTable + '
<br/><br/>
<p style="font-size: 12px; color: #666;">
This is a system generated email, please do not reply.<br/>
CSV file is encoded in UTF-8 format.
</p>';
-- Send email with HTML body and UTF-8 CSV attachment
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'HELLO',
@recipients = @EmailTo,
@subject = @EmailSubject,
@body = @EmailBody,
@body_format = 'HTML',
@file_attachments = @FilePath;
-- Clean up
IF OBJECT_ID('tempdb..##TempLoadData') IS NOT NULL
DROP TABLE ##TempLoadData;
-- Optionally delete the file after sending
DECLARE @DeleteCommand NVARCHAR(500);
SET @DeleteCommand = 'del "' + @FilePath + '"';
EXEC xp_cmdshell @DeleteCommand;
PRINT 'Email sent successfully with UTF-8 CSV attachment: ' + @FileName;
PRINT 'Records processed: ' + CAST(@RecordCount AS NVARCHAR(10));
END TRY
BEGIN CATCH
-- Clean up in case of error
IF OBJECT_ID('tempdb..##TempLoadData') IS NOT NULL
DROP TABLE ##TempLoadData;
DECLARE @ErrorMessage NVARCHAR(4000) = ERROR_MESSAGE();
DECLARE @ErrorSeverity INT = ERROR_SEVERITY();
DECLARE @ErrorState INT = ERROR_STATE();
PRINT 'Error occurred while sending email: ' + @ErrorMessage;
RAISERROR(@ErrorMessage, @ErrorSeverity, @ErrorState);
END CATCH
END
I think I figured out the solution myself; I want to post the solution for a Windows local machine here. Thanks to @Wayne for the suggestion that "It's just that making it effectively work can be super tricky depending on your system".
I open PowerShell on Windows and type the following command:
[System.IO.File]::WriteAllBytes("$env:TEMP\ctrl-d.txt", @(4))
Then I open the file using this command (open a folder and type the following in the address bar):
%TEMP%\ctrl-d.txt
Then I press Ctrl-A and Ctrl-C to copy the character to the clipboard, paste it into the interactive-mode prompt, and I am back in normal ipdb mode instead of interactive mode.
You can see the result in the picture:
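For what it's worth, the same one-byte file can be produced portably with Python instead of PowerShell (the filename is arbitrary):

```python
# Write a file containing the single EOT (Ctrl-D, 0x04) character
with open("ctrl-d.txt", "wb") as f:
    f.write(bytes([4]))
```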
Did you use the correct mediaID/mediaType for reels and videos?
I'll share my own basic CLI for posting images and videos to Instagram, so you can see how to use the media type correctly. You can check the code snippet:
func createMediaContainer() (string, error) {
endpoint := fmt.Sprintf("https://graph.instagram.com/%s/%s/media", config.Version, config.IGID)
data := url.Values{}
if mediaType == "video" {
data.Set("media_type", "REELS")
data.Set("video_url", mediaURL)
} else {
data.Set("image_url", mediaURL)
}
data.Set("caption", caption)
data.Set("access_token", config.Token)
resp, err := http.PostForm(endpoint, data)
if err != nil {
return "", err
}
defer resp.Body.Close()
body, _ := ioutil.ReadAll(resp.Body)
if resp.StatusCode != 200 {
return "", fmt.Errorf("API error: %s", string(body))
}
return parseID(body), nil
}
You can try adding the --noweb argument.
As an astronomy buff, I can offer the size of a star vs. the lifetime of a star as an example of something which as input increases, output decreases:
Our sun should burn for about 10 billion years (and we're about halfway there), but a star 10 times more massive will burn about 3,000 times brighter and live only about 20-25 million years. I'm not sure of the exact big-O or little-o equations, but astronomers have known this for some time: more massive stars burn disproportionately brighter (roughly as the 3.5 power of their mass) and therefore live much less time than smaller stars.
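As a back-of-the-envelope sketch (using the approximate mass-luminosity relation L ∝ M^3.5 and fuel ∝ M, so lifetime ∝ M^-2.5; the exponent is only an approximation):

```python
def main_sequence_lifetime_gyr(mass_solar, sun_lifetime_gyr=10.0):
    # Luminosity scales roughly as M**3.5, fuel as M, so t ∝ M / M**3.5
    luminosity = mass_solar ** 3.5
    return sun_lifetime_gyr * mass_solar / luminosity

# A 10-solar-mass star is ~3162x brighter, with a lifetime of ~0.03 Gyr (~30 Myr)
```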
Think of a hotel front desk. You walk up and say,
“Please send someone to clean my room.”
You don’t specify who that is because it depends on which housekeeper is working.
The front desk checks the schedule, like a vtable.
The person assigned at that moment goes to clean your room.
In dynamic dispatch, your code makes a request to call a function. At runtime, the program checks which specific implementation to run before sending it to do the job.
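The hotel analogy maps directly onto a virtual-method call; a minimal sketch (the class names are invented to match the analogy):

```python
class Housekeeper:
    def clean(self):
        return "standard cleaning"

class DeepCleaner(Housekeeper):
    def clean(self):  # overrides the base implementation
        return "deep cleaning"

def front_desk(worker):
    # The "front desk": it only knows the request, not who fulfils it.
    # Which clean() runs is decided at runtime from worker's actual type.
    return worker.clean()
```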
I had this problem in several versions; now I have it in version 2024.3.3.
I just cleared the cache and the problem was solved:
File > Invalidate Caches.. > Clear file system cache and Local History (check) > INVALIDATE AND RESTART
1. Install the Capacitor AdMob Plugin:
npm install @capacitor-community/admob
npx cap sync
2. Configure AdMob Plugin: Add the following to your capacitor.config.ts:
import { CapacitorConfig } from '@capacitor/core';
const config: CapacitorConfig = {
plugins: {
AdMob: {
appId: 'ca-app-pub-xxxxxxxx~xxxxxxxx', // Your AdMob App ID
testingDevices: ['YOUR_DEVICE_ID'], // For testing
},
},
};
Step 1: Initialize AdMob in your React app
import { AdMob, AdMobNative, NativeAdOptions } from '@capacitor-community/admob';
// Initialize AdMob
await AdMob.initialize({
initializeForTesting: true, // Remove in production
});
Step 2: Create Native Ad Component
import React, { useEffect, useRef } from 'react';
const NativeAdComponent: React.FC = () => {
const adRef = useRef<HTMLDivElement>(null);
useEffect(() => {
const loadNativeAd = async () => {
const options: NativeAdOptions = {
adId: 'ca-app-pub-xxxxxxxx/xxxxxxxx', // Your Native Ad Unit ID
adSize: 'MEDIUM_RECTANGLE',
position: 'CUSTOM',
margin: 0,
x: 0,
y: 0,
};
try {
await AdMobNative.createNativeAd(options);
await AdMobNative.showNativeAd();
} catch (error) {
console.error('Error loading native ad:', error);
}
};
loadNativeAd();
return () => {
AdMobNative.hideNativeAd();
};
}, []);
return <div ref={adRef} id="native-ad-container" />;
};
Step 3: Platform-specific Configuration
For iOS (ios/App/App/Info.plist):
<key>GADApplicationIdentifier</key>
<string>ca-app-pub-xxxxxxxx~xxxxxxxx</string>
<key>SKAdNetworkItems</key>
<array>
<!-- Add SKAdNetwork IDs -->
</array>
For Android (android/app/src/main/AndroidManifest.xml):
<meta-data
android:name="com.google.android.gms.ads.APPLICATION_ID"
android:value="ca-app-pub-xxxxxxxx~xxxxxxxx"/>
```typescript
import React, { useState, useEffect } from 'react';
import { AdMobNative } from '@capacitor-community/admob';

const CustomNativeAd: React.FC = () => {
  const [nativeAdData, setNativeAdData] = useState(null);

  useEffect(() => {
    const loadCustomNativeAd = async () => {
      try {
        const result = await AdMobNative.loadNativeAd({
          adUnitId: 'ca-app-pub-xxxxxxxx/xxxxxxxx',
          adFormat: 'NATIVE_ADVANCED',
        });
        setNativeAdData(result.nativeAd);
      } catch (error) {
        console.error('Failed to load native ad:', error);
      }
    };
    loadCustomNativeAd();
  }, []);

  return (
    <div className="native-ad-container">
      {nativeAdData && (
        <>
          <img src={nativeAdData.icon} alt="Ad Icon" />
          <h3>{nativeAdData.headline}</h3>
          <p>{nativeAdData.body}</p>
          <button onClick={() => AdMobNative.recordClick()}>
            {nativeAdData.callToAction}
          </button>
        </>
      )}
    </div>
  );
};
```
- **Test Thoroughly:** Use test ad unit IDs during development
- **Error Handling:** Always implement proper error handling for ad loading failures
- **User Experience:** Ensure native ads blend seamlessly with your app's design
- **Performance:** Load ads asynchronously to avoid blocking the UI
- **Compliance:** Follow Google AdMob policies for native ad implementation
You do need to sort the attributes in a DER-encoded SET. This is critical for CAdES which computes the hash of SignedAttributes by re-assembling them as an explicit SET before computing the digest. If you didn’t sort them the same way, the hashes won’t match.
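The sorting rule itself is simple to sketch: DER requires the elements of a SET OF to appear in ascending lexicographic order of their encoded octets, which plain byte-string sorting gives you. The hex values below are made-up placeholders, not real attribute encodings:

```python
# Hypothetical DER encodings of three signed attributes; the hex
# values are illustrative placeholders, not real CMS structures.
attr_encodings = [
    bytes.fromhex("301c0609000000000000000531"),
    bytes.fromhex("300f0609000000000000000431"),
    bytes.fromhex("30180609000000000000000331"),
]

# DER SET OF: elements ordered by their encoded octets, ascending.
sorted_attrs = sorted(attr_encodings)

# Both signer and verifier must re-assemble the SET in this order
# before hashing, or the digests will not match.
set_body = b"".join(sorted_attrs)
```

The key point is that the order is defined over the *encoded* bytes of each attribute, so both sides compute the same order deterministically.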
Fork flutter_udid on GitHub.
In your fork, change jcenter() → mavenCentral().
Reference your fork in pubspec.yaml:
dependencies:
  flutter_udid:
    git:
      url: paste your forked repo url
      ref: main
Now your project will use your modified fork rather than the original package from pub.dev.
Kindly go through this link, https://aws.amazon.com/blogs/storage/connect-snowflake-to-s3-tables-using-the-sagemaker-lakehouse-iceberg-rest-endpoint/.
If you need a process in an active user session but want to start it remotely, you will have to combine the Task Scheduler with event log entries. Create a task in the Task Scheduler configured to always run in the active user session, set an event as its trigger, and optionally add a filter for a keyword. Then you only need to remotely write the event/log entry to trigger it.
Make sure your Alpine.js is not loaded twice. If you are using Livewire version 3, you don't need to load Alpine anywhere else.
You can check your Livewire version in composer.json. In my case it looks like this:
{
    "require": {
        "livewire/livewire": "^3.6.4"
    }
}
This may help others:
I had very similar outputs (almost the same ones) after running all three commands below:
service docker start
systemctl status docker.service
journalctl -xe
Nothing on Stack Overflow worked. I reviewed the step-by-step installation on my WSL2 Ubuntu (standard) (env: Windows 11 Pro) and realized I had run:
sudo nano /etc/wsl.conf
and inserted this in the "wsl.conf" file:
[boot]
command = service docker start
After deleting that from "wsl.conf", everything worked well.
# Further reduce image size to ensure it fits on the PDF page
from reportlab.lib.pagesizes import letter
from reportlab.lib.units import inch
from reportlab.platypus import SimpleDocTemplate, Image as RLImage

max_width = 6 * inch
max_height = 8 * inch
doc = SimpleDocTemplate(output_pdf_path, pagesize=letter)  # output_pdf_path defined elsewhere
story = [RLImage(input_image_path, width=max_width, height=max_height)]
doc.build(story)
output_pdf_path  # notebook-style echo of the result path
I have a doubt about this diagram: how can all the angles between the H atoms be 109°? A complete angle is 360°, but here the three angles between the H atoms are 109° each, and adding three 109° angles gives 327°, not 360°. So my doubt is how they can all be 109°.
Fixed the issue by deleting node_modules and package-lock.json and running npm i inside the project folder.
I suspect that the way you are setting the Icon in displayNotification() could be the culprit:
private fun displayNotification(){
...
// set the small icon using the Icon created from bitmap (API 23+)
builder.setSmallIcon(Icon.createWithBitmap(bitmap))
.setContentTitle("Simple Notification")
...
}
It looks like your createWithBitmap() call could be repeatedly invoked by the builder; maybe assign Icon.createWithBitmap() to a variable outside of the builder?
i.e.:
val bitmapIcon = Icon.createWithBitmap(bitmap)
builder.setSmallIcon(bitmapIcon)
(I'm guessing a little here; I'm still learning Kotlin.)
There is also the fact that MainApplication.kt is the default entry point of the program unless you make changes in the Manifest file to point the launcher at different code.
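For reference, a minimal (illustrative) AndroidManifest.xml fragment showing how the Application class and launcher activity are wired up; the class names are assumptions based on the default template:

```xml
<application android:name=".MainApplication">
    <activity android:name=".MainActivity" android:exported="true">
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
        </intent-filter>
    </activity>
</application>
```

The `android:name` on `<application>` selects the Application class, while the MAIN/LAUNCHER intent-filter selects which activity the launcher starts.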
let dataPath: String = "MyDB"
//var db_uninitialized: OpaquePointer? // 👈 Reference #0 -> Never used. Will fail if called.
func openDatabase() -> OpaquePointer? {
let filePath = try! FileManager.default.url ( for: .documentDirectory , in: .userDomainMask , appropriateFor: nil , create: false ).appendingPathComponent ( dataPath )
var db: OpaquePointer? = nil
if sqlite3_open ( filePath.path , &db ) != SQLITE_OK {
debugPrint ( "Cannot open DB." )
return nil
}
else {
print ( "DB successfully created." )
return db
}
}
// 👇 Reference #1 -> A PRIMARY KEY column must be unique, i.e., no other row in the column may contain an equal value.
func createStockTable() {
let createTableString = """
CREATE TABLE IF NOT EXISTS Stocks (
id INTEGER PRIMARY KEY,
stockName STRING,
status INT,
imgName STRING,
prevClose DOUBLE,
curPrice DOUBLE,
yield DOUBLE,
noShares INT,
capitalization DOUBLE,
lastUpdated String
);
"""
var createTableStatement: OpaquePointer? = nil
if sqlite3_prepare_v2 ( initialized_db , createTableString , -1 , &createTableStatement , nil ) == SQLITE_OK {
if sqlite3_step ( createTableStatement ) == SQLITE_DONE {
print ( "Stock table is created successfully" )
} else {
print ( "Stock table creation failed." )
}
sqlite3_finalize ( createTableStatement )
}
sqlite3_close ( initialized_db ) // 👈 Reference #2 -> Connection lost and will need to be recreated for insertion function.
}
// 👇 Reference #3 -> extension on `OpaquePointer?` declared.
extension OpaquePointer? {
func insertStocks ( id: Int, stockName: String, status: Int, imgName: String, prevClose: Double, curPrice: Double, yield: Double, noShares: Int, capitalization: Double, lastUpdated: String) -> Bool {
let insertStatementString = "INSERT INTO Stocks (id, stockName, status, imgName, prevClose, curPrice, yield, noShares, capitalization, lastUpdated) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?);"
var insertStatement: OpaquePointer? = nil
if sqlite3_prepare_v2 ( self , insertStatementString , -1, &insertStatement , nil ) == SQLITE_OK {
sqlite3_bind_int ( insertStatement , 1 , Int32 ( id ) )
sqlite3_bind_text ( insertStatement , 2 , ( stockName as NSString ).utf8String , -1 , nil )
sqlite3_bind_int ( insertStatement , 3 , Int32(status))
sqlite3_bind_text ( insertStatement , 4 , ( imgName as NSString ).utf8String, -1 , nil )
sqlite3_bind_double ( insertStatement , 5 , Double ( prevClose ) )
sqlite3_bind_double ( insertStatement , 6 , Double ( curPrice ) )
sqlite3_bind_double ( insertStatement , 7 , Double ( yield ) )
sqlite3_bind_int64 ( insertStatement , 8 , Int64 ( noShares ) )
sqlite3_bind_double ( insertStatement , 9 , Double ( capitalization ) )
sqlite3_bind_text ( insertStatement , 10 , ( lastUpdated as NSString ).utf8String, -1, nil)
if sqlite3_step ( insertStatement) == SQLITE_DONE {
print("Stock Entry was created successfully")
sqlite3_finalize(insertStatement)
return true
} else {
print("Stock Entry Insert failed")
sqlite3_finalize(insertStatement) // finalize on the failure path too
return false
}
} else {
print("INSERT Statement has failed")
return false
}
}
}
// 👇 Reference #5 -> Change the `id` input from `1` to `Int.random(in: 0...10000)` to satisfy the `unique` constraint. Note this could still fail if the generated integer already exists in the `id` column.
func addStocks() {
let result = initialized_db.insertStocks ( id: Int.random(in: 0...10000), stockName: "Tulsa Motors", status: 1, imgName: "Tulsa_logo", prevClose: 125.18, curPrice: 125.18, yield: 0.025, noShares: 14357698, capitalization: .pi , lastUpdated: "2025-05-01 17:00:00")
print ( "Database insertion result: \( result )" )
}
var initialized_db = openDatabase() // 👈 Reference #6 -> Captured instance of Database connection.
createStockTable() // 👈 Reference #7 -> Connection closed at the end of function.
initialized_db = openDatabase() // 👈 Reference #8 -> Connection reestablished.
addStocks() // 👈 Reference #9 -> Dont forget to close your connection, finalize, and clean up.
If you wanted to make the id column autoincrement, like Douglas W. Palme said, you can omit it from your bind function and adjust your column indices... I would also recommend you declare it in your `createTableString` for completeness' sake.
let createTableString = """
CREATE TABLE IF NOT EXISTS Stocks (
id INTEGER PRIMARY KEY AUTOINCREMENT,
stockName STRING,
status INT,
imgName STRING,
prevClose DOUBLE,
curPrice DOUBLE,
yield DOUBLE,
noShares INT,
capitalization DOUBLE,
lastUpdated STRING
);
"""
Best regards.
Select the Node2D and not the Sprite2D. I solved it finally.
GridDB does not natively support querying nested JSON values directly within a STRING column. The current capabilities of GridDB for handling JSON payloads stored as strings do not include querying nested elements within the JSON. The approach you are currently using—selecting all rows, parsing the JSON in Java, and then filtering manually—is the typical method for dealing with JSON data stored as strings in GridDB.
If you require the ability to query nested JSON values efficiently, you may need to consider a different database system that has built-in support for JSON data types and allows querying of nested JSON elements directly, such as MongoDB. MongoDB, for example, provides powerful querying capabilities for JSON documents, including the ability to query nested fields.
In summary, with GridDB, you will need to handle JSON parsing and filtering within your application code, as native querying of nested JSON is not supported.
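The parse-then-filter step described above can be sketched like this (Python used for brevity; the rows and field names are made-up placeholders standing in for the result of a full-container SELECT):

```python
import json

# Rows as they might come back from a SELECT over the container;
# each payload is a JSON document stored in a STRING column.
rows = [
    {"id": 1, "payload": '{"device": {"type": "sensor", "temp": 21.5}}'},
    {"id": 2, "payload": '{"device": {"type": "gateway", "temp": 30.1}}'},
]

# Parse and filter on a nested field in application code, since the
# database cannot look inside the string itself.
matches = [
    row for row in rows
    if json.loads(row["payload"]).get("device", {}).get("type") == "sensor"
]

print([row["id"] for row in matches])
```

The cost of this approach is that every candidate row must be transferred and parsed client-side, which is exactly why a JSON-native store is preferable when such queries dominate.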
=LET(pvt,PIVOTBY(B2:B27,A2:A27,B2:B27,LAMBDA(x,ROWS(x)),,0,,0),sort,MATCH({"","Jan","Feb","Mar","Apr","May"},TAKE(pvt,1)),CHOOSECOLS(pvt,sort))
Any ideas why my UITabBar looks like it has a background on iOS 26? Built with Xcode 26 beta.
So I figured out what the issue was: simply removing the error from the end of the method signature seems to solve my problem, and the method is now accessible from other methods across my package.
The question is about the behavior of Swift’s Array.max when the array is of type Double and contains one or more NaN values alongside valid numeric values.
The official documentation simply states that max() returns “the sequence’s maximum element” and that it returns nil if the array is empty. However, this leaves ambiguity in cases where the concept of “maximum” is mathematically undefined, such as when NaN is involved, since any comparison with NaN is false.
The user points out that if “maximum” means “an element x such that x >= y for every other y in the array,” then an array containing a NaN technically doesn’t have a maximum element at all. That raises the question: should max() return nil, NaN, or something else in this scenario?
Through experimentation, the user observed that Swift’s implementation seems to ignore NaN values when determining the maximum and instead returns the maximum of the remaining non-NaN numbers. This approach is practical, but it’s not explicitly documented, which makes developers unsure whether it’s a guaranteed behavior or just an implementation detail that could change.
The user is seeking official Apple documentation that explicitly confirms this handling of NaN in Array.max, rather than having to infer it from experiments.
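As a side note (this is Python, not Swift, and says nothing about what Swift guarantees), the same hazard exists elsewhere: Python's built-in max gives order-dependent results with NaN, because every ordered comparison against NaN is false:

```python
import math

nan = float("nan")

# max() keeps the first element as the running candidate and replaces
# it only when a later element compares greater. NaN never compares
# greater than anything, and nothing compares greater than NaN.
a = max([nan, 1.0, 3.0])  # NaN: it starts as the candidate and is never displaced
b = max([1.0, 3.0, nan])  # 3.0: NaN never beats the current candidate

print(math.isnan(a), b)
```

This is why relying on undocumented NaN behavior, in any language, is risky: the result can hinge on element order or implementation details.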
The initialization vector is a salt - it's a random string that makes different encryption sessions uncorrelated and this makes it more difficult to crack the encryption/decryption key. By definition, the salt/iv influences the output of the encryption algorithm, and also the output of the decryption algorithm. By changing the IV in the middle of an encrypt + decrypt, you are essentially corrupting your decryption process and you were fortunate not to get total garbage as a result.
While the initialization vector does NOT have to be secret, it DOES have to be different for every new piece of plaintext that is encrypted, otherwise the ciphertexts will all be correlated and an attacker will have an easier time cracking your encryption key.
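A toy CBC-style construction makes this concrete. The XOR "cipher" below is not secure in any way; it only illustrates how the IV feeds into the chaining and why a wrong IV corrupts the first block but not necessarily the rest:

```python
BLOCK = 4

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    # CBC chaining: each plaintext block is XORed with the previous
    # ciphertext block (the IV for the first block) before "encryption".
    prev, out = iv, b""
    for i in range(0, len(plaintext), BLOCK):
        ct = xor_bytes(xor_bytes(plaintext[i:i + BLOCK], prev), key)
        out += ct
        prev = ct
    return out

def toy_decrypt(key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    prev, out = iv, b""
    for i in range(0, len(ciphertext), BLOCK):
        ct = ciphertext[i:i + BLOCK]
        out += xor_bytes(xor_bytes(ct, key), prev)
        prev = ct
    return out

key, iv = b"KEY!", b"\x00\x01\x02\x03"
msg = b"attack!!"  # two 4-byte blocks
good = toy_decrypt(key, iv, toy_encrypt(key, iv, msg))
bad = toy_decrypt(key, b"\xff\xff\xff\xff", toy_encrypt(key, iv, msg))

print(good)           # the original message
print(bad[:BLOCK])    # first block garbled by the wrong IV
print(bad[BLOCK:])    # later blocks survive, as in real CBC
```

In CBC mode only the first block depends on the IV directly, which matches the observation above: a changed IV "corrupts" the decryption rather than producing total garbage.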
Here is an update on adding a new elide-widget text box in between the other two.
import tkinter as tk
root = tk.Tk()
root.title("Testing widgets for Elide")
# create 'line number' text box
line_text = tk.Text(root, wrap="none", width=5, insertwidth=0) # don't want the cursor to appear here
line_text.pack(fill="y", side="left")
# added > create elide button text box
line_textR = tk.Text(root, wrap="none", width=2, insertwidth=0) # don't want the cursor to appear here
line_textR.pack(fill="y", side="left")
# create 'code' text box
text_box = tk.Text(root, wrap="none")
text_box.pack(fill="both", expand=True, side="left")
# add a tag to line number text box (need text to be at the right side)
line_text.tag_configure("right", justify="right")
# add some text into the text boxes
for i in range(13):
line_text.insert("end", "%s \n" % (i+1)) # add line numbers into line text box (now on the left side)
line_text.tag_add("right", "1.0", "end")
for i in range(13):
text_box.insert("end", "%s \n" % ("some text here at line number #" + str(i+1))) # add some text int the main text box (now on the right side)
for i in range(13):
line_textR.insert("end", " \n") # add blank space on each line for the elide widget text box _ this allows for widget placement by line number (now in the middle)
# add button to use as elide +/- (inside text boxes? _ not sure which widget is correct (button, label, image)?
elide_button = tk.Button(line_textR, text="-")
line_textR.window_create("11.0", window=elide_button) # *** test ***
root.mainloop()
To make the two series of samples independent and uncorrelated, I suggest you randomly shuffle the order of the samples in the second series.
from pdf2image import convert_from_path

# Convert the PDF into JPG images
pages = convert_from_path(file_path, dpi=200)
jpg_paths = []
for i, page in enumerate(pages):
    jpg_path = f"/mnt/data/linea_tiempo_fosa_mariana_page_{i+1}.jpg"
    page.save(jpg_path, "JPEG")
    jpg_paths.append(jpg_path)
jpg_paths
I think you want the in operator, but I haven't tested the following:
for (let idx = 0; idx < collection.length; idx++) {
const localIdx = idx;
if (idx in collection) {
collection[localIdx] = collection[localIdx] * 2;
}
}
What resource group are you providing in the command to create the deployment?
az deployment group create --resource-group
This is the scope the deployment will be created in. You cannot create resources in two different resource groups in the same file just by using scope.
You should create a separate bicep file for creating the resources in the second RG and use that resource group name when running the command to create the deployment.
Although this is not exactly what you are asking for, Azure DevOps supports adding retryCountOnTaskFailure to a task, which allows you to configure retries if the task fails.
Microsoft doc reference - https://learn.microsoft.com/en-us/azure/devops/pipelines/process/tasks?view=azure-devops&tabs=yaml#number-of-retries-if-task-failed
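For example (the task and script names here are placeholders):

```yaml
steps:
  - task: Bash@3
    retryCountOnTaskFailure: 3   # re-run up to 3 more times if the task fails
    inputs:
      targetType: inline
      script: ./run-flaky-tests.sh
```

Note that the retries apply per task, not per pipeline, so each flaky task needs its own setting.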
No standards exist for direct matplotlib patch conversion - at least as far as I'm aware. If you're just trying to create spherical ellipses, there may be a more robust approach using EllipseSkyRegion from the astropy-regions package.
University Course Management
Entities & Attributes
Department (DeptName PK, DeptNo, HeadID, StartDate)
Course (CourseCode PK, Title, CreditValue)
Professor (ProfID PK, Name, ContactInfo, Address, Salary, Gender, DOB)
Student (StudentID PK, Name, ContactInfo, Address, Gender, DOB)
Relationships
Department offers Course (1–M)
Department headed_by Professor (1–1)
Professor teaches Course (M–M, attribute: HoursPerWeek)
Professor supervised_by Professor (recursive)
Student enrolled_in Course (M–M, attribute: Grade)
AutoCAD’s built-in DBCONNECT lets you link drawing objects to external database records, but it does not automatically update label text from SQL updates in real time. To achieve what you want, you’ll need to use dbConnect with database-linked attributes or fields, then run a data update inside AutoCAD using DATAUPDATE (or manually refresh the link) after making changes in SQL. For dynamic updates without manual refresh, you’d need custom automation using AutoLISP, .NET API, or VBA to query SQL and push the values into AutoCAD objects. Autodesk’s dbConnect documentation is a good starting point.
So, the pros do it with pure debug or assembly and "assembly coders do it with routines".
https://en.wikipedia.org/wiki/Debug_(command)
https://en.wikipedia.org/wiki/Assembly_language
http://google.com/search?q=assembly+coders+do+it+with+routine
http://ftp.lanet.lv/ftp/mirror/x2ftp/msdos/programming/demosrc/giantsrc.zip
http://youtu.be/j7_Cym4QYe8
MS-DOS 256 bytes, Memories by "HellMood"
http://youtu.be/Imquk_3oFf4
http://www.sizecoding.org/wiki/Memories
http://pferrie.epizy.com/misc/demos/demos.htm
Of course other compilers also exist for different programming languages, whichever is your flavor. I don't like today's apps anymore, which are huge in both RAM and application memory usage, and we are using interpreters again. No wonder smartphones require recharging every now and then.
https://en.wikipedia.org/wiki/Interpreter_(computing)
So now both AI and Python are very bad for power consumption after all...
WSL uses the Windows Proxy Settings. Setting a manual proxy in a new enough version of Windows 11 will allow `wsl --update --web-download` to work.
You can optionally use the AutoProxy setting, but I believe this will also be applied to all distributions that you have running, so beware.
Scotty's comment about the WSL manual download location from the Microsoft WSL GitHub is helpful. Before being able to use wsl update, I would go there and download the latest "Release" (not the pre-releases) for use with Docker. The download script mentioned in the comments above is interesting as well.
You are using the new Observation Framework (iOS 17 / Swift 5.9), and the problem is that @Environment gives you the LoginViewModel object itself, but it is not directly bound.
To use $loginVM.showApiAlert in .alert, you need a two-way binding, but you don't have one because $loginVM does not exist in this context. Instead, you can wrap the property in a Binding directly in .alert.
In some more complex scenarios these VM Options may help:
-Djava.rmi.server.hostname=localhost
-Dcom.sun.management.jmxremote.port=6799
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
Any other free port can be used instead of 6799.
For anyone in 2025:
The only solution that really works is to log out of the account, press Option+Cmd+E (to empty Safari's cache), and then log back in.
After that you should be able to download the .p8 file as normal.
There is a new package that allows you to do that:
https://www.npmjs.com/package/msw-request-assertions
Try to switch off:
"Settings" / "Developer Options" / "Verify apps over USB"
If you only request a subset of name, email, and user profile, the list of test users is not taken into account.
According to the documentation, it is an exception (check the part that says "The only exception to this behavior...").
This applies to applications in Publishing status as Testing.
Also have a look at these tools: 1. Watcom C/C++ (it features tools for developing and debugging code for DOS, NEC PC-98, OS/2, Windows, and Linux operating systems, which are based upon 16-bit x86, 32-bit IA-32, or 64-bit x86-64 compatible processors; it is a true cross compiler)
https://en.wikipedia.org/wiki/Watcom_C/C%2B%2B
https://en.wikipedia.org/wiki/PC-98 (a Japanese system)
https://en.wikipedia.org/wiki/Cross_compiler
And here is how to break the 32-bit code limit in MS-DOS, from 640 kB up to 64 MB, aka DOS/4G, which was made widely popular by computer games like Doom or Tomb Raider.
https://en.wikipedia.org/wiki/DOS/4G
https://en.wikipedia.org/wiki/Doom_(1993_video_game)
https://www.computerworld.com/article/1563853/the-640k-quote-won-t-go-away-but-did-gates-really-say-it.html
If you run your BDDs with gradle use this task config:
(Note "--tags" argument with "not @sandbox" as a value for tests I want to exclude)
task("uat") {
description = "Runs user acceptance tests."
group = "verification"
dependsOn("assemble", "testClasses")
doLast {
val featurePath = project.findProperty("feature") ?: "src/test/resources/features"
javaexec {
mainClass.set("io.cucumber.core.cli.Main")
classpath = configurations["cucumberRuntime"] + sourceSets["main"].output + sourceSets["test"].output
args(
"--plugin",
"pretty",
"--plugin",
"html:build/cucumber/cucumber.html",
"--plugin",
"junit:build/cucumber/cucumber-junit.xml",
"--plugin",
"json:build/cucumber/cucumber.json",
"--glue",
"com.my.cool.package.uat",
"--tags",
"not @sandbox",
featurePath.toString(),
)
}
}
}
Instead of trying to disable validate_keys in the type, you define metadata as a hash that doesn't declare its keys, i.e., an "open" hash. In the dry-validation DSL, that means just using .hash with no nested schema, or .hash(any: :any).
Here’s the clean approach:
class MyContract < Dry::Validation::Contract
config.validate_keys = true
params do
required(:allowed_param).filled(:string)
optional(:metadata).hash
end
end
That’s it.
No nested schema means dry-schema will only check “is this a hash?” and won’t validate keys inside it even with validate_keys = true
.
class MyContract < Dry::Validation::Contract
config.validate_keys = true
params do
required(:allowed_param).filled(:string)
optional(:metadata).hash do
any(:any)
end
end
end
Both of these achieve your goal: metadata can have any keys and values, but the top level still rejects unexpected keys.
Unfortunately, it seems that Microsoft, owning GitHub, is not willing to allow us to be free of the AI integrations. As best as I can tell, as long as you do not set up Copilot in your environment, it should not be able to run queries. However, it seems unlikely that you can remove it.
In VS Code 1.103.1, there is a "Finish Setup" icon in the bottom tray next to the Notifications bell that can serve as an indicator that you are not set up to use any AI models. However, it would be reasonable to guess that all other telemetry collected by the application is contributing to their AI development and integration.
If you are resolved to work with an editor that does not include any AI integrations, then you may need to change editors.
A quick fix: change the domain name, in case your initial domain is known and you want to make the app inaccessible.
Settings → Domain → Edit
Another option is to create a supertype of Company and Customer, called something like Member. Then the relationship is from Membership to Member. The one common data element of Member is the ID#. The other data elements are specific to the subtypes: Company (name, contact name) or Customer (first name, last name).
I tried your code.
The walls were being created but they were falling through the bottom of the screen never to be seen.
To fix this:
I added a StaticBody2D to the bottom of the screen to serve as a floor.
Added a CollisionShape2D to the StaticBody2D as a child so it does collision detection
Added a RectangleShape2D to the CollisionShape2D in the inspector and dragged it out in 2D to form the collision area.
Did the same CollisionShape2D and RectangleShape2D setup for the wall; that stops it falling through the floor.
CuPy was outputting the floats as a cupy.ndarray in the tuple comprehension above, whereas NumPy outputted them as plain floats. Changing the code to this produced the desired output:
import cupy as cp
import numpy as np

def sortBis(mat: np.ndarray):
    colInds = cp.lexsort(mat[:, 1:])
    mat[:, 1:] = mat[:, 1:][:, colInds]
    return mat

newMat = cp.array([[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], [ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., -1.]])
newMatSet.add(tuple(tuple(float(i) for i in row) for row in sortBis(newMat)))
I found another solution to this problem. Add:
{
"C_Cpp.codeAnalysis.clangTidy.args": [ "${default}", "-mlong-double-64" ],
}
to the settings.json file. This sets the long double size to 64 bits, so the 128-bit long double won't be included.
I get a similar error when I compile from the command line manually without this option, but there it is about a missing stream << operator for the __float128 type. :-(
If you specifically want to keep the invalid `httpa://` protocol (perhaps for testing or a custom use case), you need to understand that **standard browsers and apps will reject it** since it’s not a recognized scheme like `http://` or `https://`.
---
### How to Use `httpa://` (Non-Standard Protocol)
#### Option 1: **For Development/Testing (Custom Protocol)**
If this is for a custom app or local environment (e.g., a mock API), you can:
1. **Register `httpa://` as a custom protocol** in your app (e.g., Electron, mobile app, or browser extension).
- Example (Electron.js):
```javascript
app.setAsDefaultProtocolClient('httpa');
```
- For **Android/iOS apps**, define it in the manifest/plist file.
2. **Use a URL handler** to intercept `httpa://` links and redirect/log them.
#### Option 2: **Replace with Standard Protocol**
If this was a typo, just correct it to `http://` or `https://` (recommended for production):
```text
https://tattoo.api/api-create?utm_source=APKPUREAds&...
```
If you need the literal text `httpa://` for documentation/mockups (non-clickable):
```text
httpa://tattoo.api/api-create?utm_...
```
### Why `httpa://` Won't Work by Default
Browsers and HTTP libraries (e.g., `requests` in Python) only support registered protocols (e.g., `http`, `ftp`, `data`). Requesting `httpa://...` will typically result in an error.
I just ran these two commands and it worked:
cd android/
./gradlew clean
Then build again.
I just downloaded my Dream League Soccer Classic game, but it's not working; it says it needs to download additional content from Google Play.